The OECD calls for trustworthy AI, emphasizing information integrity and transparency, interoperability, sustainability and responsibility.
On May 3, 2024, the OECD updated its Recommendation on Artificial Intelligence (the “Recommendation”). The document, initially adopted in 2019, sets out principles for the responsible and trustworthy management of AI. It is a non-binding legal instrument but represents a political commitment from the adhering countries. Since 2019, the principles in the Recommendation have been embraced by the G20, the European Union, Japan, the USA and other jurisdictions, and have informed the work of the United Nations and the EU-US Trade and Technology Council.
The OECD’s AI principles are based on values of respect for human rights and democratic values, inclusion, diversity, fairness, innovation and wellbeing. These values translate into five principles for developing and using AI systems:
- inclusive growth, sustainable development and wellbeing;
- respect for the rule of law, human rights and democratic values, including fairness and privacy;
- transparency and explainability;
- robustness and security; and
- accountability.
The document also contains recommendations for governments and international organizations on how to promote and facilitate the above principles, addressing aspects such as investment, research, education, data access, cooperation, monitoring and accountability.
In November 2023, the OECD Council approved a change to the Recommendation to update its definition of an “AI system,” a definition that was subsequently adopted in the recently approved EU AI Act. In May 2024, the OECD Council adopted a further revision of the Recommendation, this time with a broader scope.
The 2024 revision
Due to the accelerated rate of technological advances in the AI field, especially regarding general-purpose and generative AI, the OECD has updated the Recommendation to keep up with the new challenges and opportunities posed by these technologies. Therefore, in May 2024, the OECD’s Ministerial Council approved a revision that introduces significant changes.
The revision is centered on strengthening and clarifying some key aspects, such as:
- AI system safety, introducing mechanisms and safeguards to manage AI systems that risk causing undue harm or exhibiting undesired behavior, as well as the ability to override, repair and decommission these systems.
- Information integrity, reflecting the growing importance of addressing misinformation and disinformation in the context of generative AI, which can create false or misleading content with great realism and plausibility.
- Responsible business conduct, emphasizing the need for organizations to develop, implement and use AI systems in an ethical, legal and socially responsible way, cooperating with different actors involved and respecting the rights of people affected by AI.
- Transparency and responsible disclosure, clarifying what it means to adequately report on the objectives, characteristics, limitations, and risks of the AI systems, as well as on the sources, quality and use of the data fueling them.
- Environmental sustainability, making explicit reference to this, due to the growing concern over AI’s impact on the environment and natural resources, and the need to minimize and compensate for it.
- The need for jurisdictions to work together, underscoring the need to promote interoperable governance and policy environments for AI that facilitate innovation and international cooperation.
With these revisions, the OECD shows its capacity to adapt and lead in the digital policy environment, and reaffirms its commitment to AI that is innovative, trustworthy, and respectful of human rights and democratic values.