International / Regional Body

Organisation for Economic Co-operation and Development (OECD)

15 total entries · 8 official sources · 11 topic areas
Types: Law / Act (2) · Policy / Guidance (2) · National Strategy (2) · Standard / Framework (1) · International Agreement (1) · Working Paper (1) · Court Case (2) · Other (4)
Topics: Agentic AI · Antitrust & Competition · Copyright & IP · Cybersecurity · Data Privacy & Protection · Generative AI · Judicial & Law Enforcement · Labor & Workforce · Liability & Accountability · National Strategy · Safe & Responsible AI
Court Case · ✓ Official

Osman Medical Centre v. Santé Québec

An arbitrator used an unidentified AI tool in the proceedings. Fabricated material: case law. As reported, the arbitral award contains fabricated citations. Outcome: set-aside pending.

Party: Arbitrator · Tool: Unidentified
1 November 2025 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
Court Case · ✓ Official

Orano Mining v. Niger (2)

A lawyer apparently used an AI tool (use implied rather than confirmed) in proceedings before the ICSID Tribunal. Fabricated material: case law, citing a disqualification decision from Antin v. Spain that does not exist. Outcome: arguments ignored.

Court: ICSID Tribunal · Party: Lawyer · Tool: Implied (by me)
26 August 2025 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
Policy / Guidance · ✓ Official

OECD releases Due Diligence Guidance for Responsible AI

The OECD has approved the "Due Diligence Guidance for Responsible AI," which aims to assist enterprises in implementing the OECD Guidelines for Multinational Enterprises and the AI Principles, focusing on responsible AI development and use. This guidance outlines a due diligence framework consisting of six steps: embedding responsible business conduct (RBC) into policies, identifying and assessing adverse impacts, ceasing and mitigating those impacts, tracking implementation, communicating actions taken, and providing remediation when necessary. It emphasizes the importance of stakeholder engagement, particularly with workers and affected communities, to ensure that AI systems are developed and deployed responsibly, addressing potential harms while fostering innovation and economic growth. The guidance serves as a tool for multinational enterprises involved in the AI value chain, promoting coherence with existing international and national AI risk management frameworks and encouraging proactive risk management to enhance trust and competitiveness in the global market.

26 January 2026 · Generative AI
National Strategy

UN and OECD sign MoU on AI

The United Nations Office for Digital and Emerging Technologies (ODET) and the Organisation for Economic Co-operation and Development (OECD) have signed a Memorandum of Understanding (MoU) to enhance cooperation on artificial intelligence. Areas of cooperation under the MoU include joint research and analysis to support regular science- and evidence-based assessments of AI opportunities and risks, as well as other activities leveraging the organisations' respective strengths. The MoU builds on recent policy developments, including the Global Digital Compact, the updated OECD AI Principles (OECD.AI), and the Global Partnership on AI (GPAI).

23 April 2025 · National Strategy
Working Paper · ✓ Official

OECD releases report on intellectual property issues in AI trained on scraped data

This report examines recent developments at the intersection of AI and intellectual property rights, with a particular focus on data scraping practices. It provides an overview of the role of data scraping in AI training, current legal frameworks and stakeholder perspectives, as well as preliminary considerations and potential policy approaches to help guide policymakers in navigating these issues and facilitate a greater understanding of data scraping.

9 February 2025 · Copyright & IP · Data Privacy & Protection
Law / Act

OECD launches new reporting framework for monitoring application of Hiroshima AI Process (HAIP) international code of conduct for organizations developing advanced AI systems

The OECD has launched a reporting framework for monitoring the application of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. The framework encourages transparency and accountability among organizations developing advanced AI systems; the results will facilitate transparency and comparability of risk mitigation measures and contribute to identifying and disseminating good practices. The framework is a direct outcome of the G7 Hiroshima AI Process, initiated under the Japanese G7 Presidency in 2023 and further advanced under the Italian G7 Presidency in 2024.

7 February 2025 · Agentic AI · Generative AI
Secondary source
Policy / Guidance

OECD releases paper on assessing potential future AI risks, benefits and policy imperatives, including the top 10 policy priorities

The OECD has released a report which reviews research and expert perspectives on potential future AI benefits, risks and policy actions. It features contributions from members of the OECD Expert Group on AI Futures ("Expert Group"), which is jointly supported by the OECD AI and Emerging Digital Technologies division (AIEDT) and Strategic Foresight Unit (SFU). Among other findings, the report identified 66 potential policy approaches to obtain AI benefits and mitigate risks, with the top 10 policy priorities being: (1) establish clearer rules, including on liability, for AI harms to remove uncertainties and promote adoption; (2) consider approaches to restrict or prevent certain “red line” AI uses; (3) require or promote the disclosure of key information about some types of AI systems; (4) ensure risk management procedures are followed throughout the lifecycle of AI systems that may pose a high risk; (5) mitigate competitive race dynamics in AI development and deployment that could limit fair competition and result in harms, including through international co-operation; (6) invest in research on AI safety and trustworthiness approaches, including AI alignment, capability evaluations, interpretability, explainability and transparency; (7) facilitate educational, retraining and reskilling opportunities to help address labour market disruptions and the growing need for AI skills; (8) empower stakeholders and society to help build trust and reinforce democracy; (9) mitigate excessive power concentration; and (10) take targeted actions to advance specific future AI benefits.

14 November 2024 · Antitrust & Competition · Data Privacy & Protection
Other · ✓ Official

UNESCO, OECD and G7 release toolkit for AI in the public sector

UNESCO and the OECD, with contributions from the G7, have developed a toolkit for policymakers and public sector leaders to translate principles for safe, secure, and trustworthy AI into actionable policies. Key messages of the toolkit are: (1) establish clear strategic objectives and action plans in line with expected benefits; (2) include the voices of users in shaping strategies and implementation; (3) overcome siloed structures in government for effective governance; (4) establish robust frameworks for the responsible use of AI; (5) improve scalability and replicability of successful AI initiatives; (6) enable a more systematic use of AI in and by the public sector; and (7) adopt an incremental and experimental approach to the deployment and use of AI in and by the public sector.

15 October 2024 · Generative AI · Cybersecurity · Data Privacy & Protection
Other · ✓ Official

OECD and UN announce next steps in collaboration on AI

OECD Deputy Secretary-General Ulrik Vestergaard Knudsen and the UN Secretary-General's Envoy on Technology, Under-Secretary-General Amandeep Singh Gill, have announced an enhanced collaboration between the UN and the OECD on global AI governance. The two organisations will leverage their respective networks, convening platforms, and ongoing work on AI policy and governance to support their member states and other stakeholders in fostering a globally inclusive approach to AI. Further deliverables have yet to be unveiled, but the announcement states that the collaboration "will focus on regular science and evidence-based AI risk and opportunity assessments".

22 September 2024 · Generative AI · Data Privacy & Protection · Cybersecurity
International Agreement

SDAIA and OECD sign MoU to enhance AI incident monitoring in the Middle East

It has been reported that the Saudi Data and AI Authority (SDAIA) has signed a memorandum of understanding (MoU) with the Organisation for Economic Co-operation and Development (OECD). This strategic partnership aims to strengthen AI incident monitoring in Middle Eastern countries and improve the tracking of AI developments by implementing the OECD AI Incidents Monitor to track data in the Arabic language.

11 September 2024 · Data Privacy & Protection · Cybersecurity
Secondary source
Standard / Framework

AI Standards Hub and OECD collaborate to enhance OECD Catalogue of Tools and Metrics for Trustworthy AI

The AI Standards Hub (Hub) has collaborated with the OECD to enhance the OECD Catalogue of Tools and Metrics for Trustworthy AI with AI-specific standards from the AI Standards Hub’s Standards Database. The Hub will continue to enrich the OECD Catalogue of Tools and Metrics as new standards become available. The Hub is a partnership between The Alan Turing Institute (ATI), the British Standards Institution (BSI) and the National Physical Laboratory (NPL).

9 September 2024 · Generative AI · Data Privacy & Protection
Secondary source
National Strategy

OECD opens consultation on risk thresholds for advanced AI systems

The OECD has collaborated with various stakeholders to explore the potential approaches, opportunities, and limitations of establishing risk thresholds for advanced AI systems. The consultation will explore questions on: (1) relevant publications and resources on AI risk thresholds, (2) the appropriateness of compute power-based AI risk thresholds, (3) the value and nature of alternative AI risk thresholds, (4) strategies for identifying, setting, and measuring AI risk thresholds, (5) requirements for systems exceeding AI risk thresholds, and (6) considerations for the OECD and collaborators in designing and implementing AI risk thresholds. The consultation closes on 10 September 2024.

26 July 2024 · National Strategy
Law / Act

OECD opens consultation into application of Hiroshima Process Code of Conduct

The Organisation for Economic Co-operation and Development (OECD) has opened the public consultation on the application of the Hiroshima Process International Code of Conduct (Code of Conduct) for organisations developing advanced AI systems. Key points of the consultation include: (1) assessing how organisations align with the risk-based Code of Conduct for safe, secure, and trustworthy AI, (2) gathering input on the Code of Conduct's focus on high-level principles for organisations developing advanced AI systems like foundation models and generative AI, and (3) collecting feedback on the Code of Conduct's content, including risk identification, evaluation, and management, transparency reporting, content authentication, international interoperability, and standards and protections for personal data. The consultation will close on 6 September 2024.

22 July 2024 · Generative AI · Data Privacy & Protection · Cybersecurity
Secondary source
Other · ✓ Official

OECD updates AI Principles to stay abreast of rapid technological developments

The OECD has adopted revisions to the landmark OECD AI Principles. The updated Principles more directly address AI-associated challenges involving privacy, intellectual property rights, safety, and information integrity, and clarify the Principles' general scope to ensure applicability to AI developments worldwide.

3 May 2024 · Data Privacy & Protection · Copyright & IP · Cybersecurity
Other · ✓ Official

OECD AI Principles (2019)

The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. Adopted in May 2019 and updated in May 2024, they set standards for AI that are practical and flexible enough to stand the test of time. Values-based principles include (1) promoting inclusive growth, sustainable development, and well-being, (2) respecting human rights and democratic values, (3) ensuring transparency and explainability, (4) ensuring robustness, security, and safety, and (5) fostering accountability and responsibility. Recommendations for policymakers include (1) investing in AI research and development, (2) fostering an inclusive AI-enabling ecosystem, (3) shaping an enabling, interoperable governance and policy environment for AI, (4) building human capacity and preparing for labor market transitions, and (5) international cooperation for trustworthy AI.

Framework · Safe & Responsible AI · Labor & Workforce