What are the OECD AI Principles?
The OECD Principles on Artificial Intelligence were adopted in May 2019, making them the first intergovernmental standard on AI. They were subsequently endorsed by the G20 and have influenced AI policy globally, including the EU AI Act and US national AI initiatives.
Both OECD member and partner countries have adopted the OECD AI Principles, together representing the world's major economies.
The Five Principles for Trustworthy AI
1. Inclusive Growth, Sustainable Development, and Well-being
AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being. It should augment human capabilities and enhance creativity.
In Practice:
- Consider who benefits and who might be harmed by AI systems
- Assess environmental impact of AI development and deployment
- Design AI to enhance rather than replace human capabilities
- Evaluate societal implications beyond immediate business benefits
2. Human-Centered Values and Fairness
AI systems should respect human dignity, privacy, and autonomy. They should be designed to uphold fairness and avoid discrimination.
In Practice:
- Embed privacy by design in AI systems
- Test for and mitigate bias across protected characteristics
- Ensure AI respects user autonomy and choice
- Protect fundamental rights and democratic values
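The bias-testing practice above can be sketched as a simple selection-rate disparity check. This is a minimal illustration, not an OECD-prescribed method: the records, group labels, and the four-fifths threshold mentioned in the comment are assumptions, and real fairness audits use dedicated tooling and multiple metrics.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the favorable-outcome rate for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Ratios below ~0.8 are commonly flagged for review (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision records: (group label, 1 = favorable outcome)
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(records)        # per-group favorable rates
ratio = disparate_impact_ratio(rates)   # flag if well below 0.8
```

A check like this belongs in pre-deployment testing and in ongoing monitoring, since disparities can emerge after launch as input data drifts.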
3. Transparency and Explainability
There should be transparency about AI systems so people understand when they are interacting with AI and can challenge outcomes. Organizations should disclose information about AI systems in an understandable way.
In Practice:
- Disclose when AI is being used in decision-making
- Explain AI decisions in terms users can understand
- Document AI system capabilities and limitations
- Provide mechanisms for users to inquire about AI decisions
4. Robustness, Security, and Safety
AI systems should function robustly, securely, and safely throughout their lifecycle. They should not pose unreasonable safety risks and should be resilient against security threats.
In Practice:
- Test AI systems thoroughly before deployment
- Implement security controls against adversarial attacks
- Monitor AI performance and address degradation
- Have fallback mechanisms when AI fails
5. Accountability
Organizations and individuals developing, deploying, or operating AI systems should be accountable for their proper functioning in accordance with the above principles.
In Practice:
- Establish clear ownership and responsibility for AI systems
- Implement governance mechanisms for AI oversight
- Maintain audit trails for AI decisions
- Provide redress mechanisms for those affected by AI
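The audit-trail practice above can be sketched as an append-only decision log. This is an illustrative sketch, not a reference design: the `AuditLog` class and the recorded fields are assumptions, and a production system would add durable storage, access controls, and retention policies. The hash chain here is one common way to make after-the-fact edits detectable.

```python
import json
import hashlib
import datetime

class AuditLog:
    """Append-only log of AI decisions; each entry carries the hash of the
    previous entry so tampering breaks the chain and can be detected."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # sentinel hash for the first entry

    def record(self, system, inputs, decision, operator):
        """Append one decision record and advance the hash chain."""
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "system": system,
            "inputs": inputs,
            "decision": decision,
            "operator": operator,
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the hash chain; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return True

# Hypothetical usage with a fictional loan-decision model
log = AuditLog()
log.record("loan-model-v2", {"score": 640}, "deny", "ops-team")
log.record("loan-model-v2", {"score": 710}, "approve", "ops-team")
```

A log like this supports both internal governance reviews and external redress: an affected person's challenge can be traced back to the exact inputs and decision on record.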
OECD Recommendations for AI Policy
Beyond the principles, the OECD provides recommendations for national policies and international cooperation:
- Investing in AI R&D: Support long-term research and development
- Fostering an AI Ecosystem: Create enabling conditions for AI innovation
- Preparing for Labor Transition: Support workers affected by AI transformation
- International Cooperation: Work across borders on AI governance
- Policy Development: Develop flexible and adaptive AI policies
How OECD Principles Influence Regulations
The OECD Principles have shaped AI regulations worldwide:
- EU AI Act: Requirements for high-risk AI align with OECD trustworthiness characteristics
- US National AI Initiative: Explicitly references OECD Principles
- NIST AI RMF: Trustworthiness characteristics map to OECD Principles
- G7/G20 Statements: Built on OECD framework
- National AI Strategies: Many countries reference OECD as foundation
Implementing OECD Principles
For Organizations
- Adopt AI principles that align with OECD framework
- Conduct AI impact assessments
- Establish AI ethics committees or review boards
- Train staff on responsible AI practices
- Document AI systems and decisions
- Create feedback mechanisms for affected parties
Assessment Framework
Evaluate your AI systems against each principle:
- Does this AI benefit people broadly, or only some?
- Does it respect privacy and avoid unfair discrimination?
- Can users understand when they're interacting with AI?
- Is it robust against failures and attacks?
- Is there clear accountability for outcomes?
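The five questions above can be turned into a lightweight self-assessment. The principle names follow the OECD framework, but the pass/fail answer format and gap-reporting logic below are illustrative assumptions; real impact assessments use graded scoring and evidence requirements.

```python
# One question per OECD principle, mirroring the checklist above
PRINCIPLES = [
    ("Inclusive growth and well-being",
     "Does this AI benefit people broadly, or only some?"),
    ("Human-centered values and fairness",
     "Does it respect privacy and avoid unfair discrimination?"),
    ("Transparency and explainability",
     "Can users understand when they're interacting with AI?"),
    ("Robustness, security, and safety",
     "Is it robust against failures and attacks?"),
    ("Accountability",
     "Is there clear accountability for outcomes?"),
]

def assess(answers):
    """answers maps each principle name to True (meets) or False (gap).
    Returns the principles needing remediation; raises if any are unanswered."""
    missing = [name for name, _ in PRINCIPLES if name not in answers]
    if missing:
        raise ValueError(f"unanswered principles: {missing}")
    return [name for name, _ in PRINCIPLES if not answers[name]]

# Hypothetical assessment of a system with one identified gap
answers = {
    "Inclusive growth and well-being": True,
    "Human-centered values and fairness": True,
    "Transparency and explainability": False,  # assumed gap for illustration
    "Robustness, security, and safety": True,
    "Accountability": True,
}
gaps = assess(answers)  # principles that need remediation
```

Running such a checklist at design time and again before each major release keeps the assessment tied to the system as it actually ships, not as it was originally planned.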