Ethics and Principles

Recommendation on the Ethics of Artificial Intelligence

UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021, presents a comprehensive set of foundational principles designed to guide the responsible development and deployment of AI technologies worldwide. The text underscores the protection of human rights, the promotion of transparency and fairness, the ethical management of data, and the crucial role of human oversight in automated decision-making processes. Additionally, it outlines extensive policy action areas aimed at helping governments and institutions translate ethical principles into practice across diverse sectors—including education, healthcare, environmental management, governance, and gender equality. Its purpose is to provide a global framework that supports the creation of AI systems that enhance social wellbeing, mitigate risks, and contribute to more equitable and inclusive societies.

Ethical Impact Assessment: A Tool of the Recommendation on the Ethics of Artificial Intelligence

UNESCO’s ‘Ethical Impact Assessment’ tool (2023) provides a practical framework within the Recommendation on the Ethics of Artificial Intelligence (AI) to ensure that AI systems are developed in accordance with strong ethical principles. Its dual purpose is to assess whether algorithms align with the values and guidelines set out in the Recommendation, and to enhance transparency by making information about how AI systems are designed, deployed, and governed accessible to the public. The assessment covers the full AI lifecycle and includes both ex-ante and ex-post requirements. Key considerations include data quality and representativeness, the diversity of development teams, the robustness and auditability of algorithms, and the integration of ethical decision points throughout all stages of development. Suitable for both public and private sectors, the tool supports the creation of ethical AI, effective risk management, compliance with international standards, and meaningful engagement with stakeholders.

Readiness Assessment Methodology: A Tool of the Recommendation on the Ethics of Artificial Intelligence

UNESCO’s ‘Readiness Assessment Methodology’ offers a comprehensive framework for countries and organizations to evaluate their preparedness to implement the Recommendation on the Ethics of Artificial Intelligence (AI). The methodology assesses critical dimensions such as AI governance, technological infrastructure, human capacities, responsible data management, and the maturity of regulatory frameworks, providing a clear picture of how equipped an environment is to deploy ethical AI. It enables detailed diagnostics that highlight strengths, gaps, and areas for improvement, while also offering practical guidance to support the development of effective public policies and trustworthy AI systems. Furthermore, it outlines strategic action areas that help guide investment, strengthen institutions, and foster an AI ecosystem grounded in human rights, transparency, safety, and societal wellbeing.

OECD AI Principles

The OECD AI Principles are the first intergovernmental standard developed to promote innovative and trustworthy Artificial Intelligence (AI) that upholds human rights and democratic values. Adopted in 2019 and updated in 2024, they consist of five values-based principles and five complementary recommendations that offer practical and flexible guidance for policymakers and AI stakeholders. These principles emphasize transparency, safety, fairness, accountability, and social wellbeing as essential foundations for the development, deployment, and oversight of AI systems. Their aim is to provide a robust global framework that supports countries in fostering responsible and trustworthy AI aligned with international norms, while addressing emerging ethical, technical, and societal challenges associated with rapidly evolving AI technologies.

Policies, data and analysis for trustworthy artificial intelligence

OECD.AI and GPAI provide robust policies, data, and analytical resources to support the development of safe, trustworthy, and human-centred Artificial Intelligence (AI). Through comprehensive datasets, policy evaluations, comparative indicators, and in-depth research, these platforms equip governments, researchers, and AI stakeholders with the evidence needed to understand emerging trends and address the ethical, social, and economic impacts of AI. Their collaborative work strengthens international cooperation, fosters knowledge sharing, and offers practical tools to help countries design effective regulations and national strategies for responsible and transparent AI. Together, they serve as a global reference point for advancing AI systems that deliver meaningful benefits for society.

Ethics Guidelines for Trustworthy AI (High-Level Expert Group on AI, European Commission)

The ‘Ethics Guidelines for Trustworthy AI’, developed by the European Commission’s High-Level Expert Group on AI, provide a foundational framework for ensuring that Artificial Intelligence (AI) is designed and deployed in an ethical, lawful, and technically robust manner. The guidelines define three components of trustworthy AI (it should be lawful, ethical, and robust) and outline seven key requirements that operationalize them: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability. They also offer practical recommendations and assessment checklists to support the implementation of these requirements throughout the entire AI lifecycle. Widely recognized across Europe and internationally, these guidelines serve as a key reference for policymakers, organizations, and practitioners seeking to develop human-centred, rights-based and trustworthy AI systems.

Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators

The ‘Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators’ provide a comprehensive framework for the responsible integration of AI technologies and the use of educational data in learning environments. The guidelines outline key principles to ensure that AI is deployed safely, transparently, and in full respect of learners’ rights. They offer educators practical support for understanding and evaluating AI-based tools, emphasizing the importance of safeguarding privacy, ensuring data security, and preventing bias or harmful discrimination. In addition, the document presents actionable recommendations for using AI to enhance teaching and learning processes while protecting teacher autonomy and promoting equity and inclusiveness. The ultimate goal is to guide education systems in adopting ethical and trustworthy AI practices that genuinely support student wellbeing and high-quality learning.

Living Guidelines on the Responsible Use of Generative AI in Research

The ‘Living Guidelines on the Responsible Use of Generative AI in Research’, published by the European Commission, offer a dynamic and evolving framework to support researchers and institutions in adopting Generative Artificial Intelligence (Generative AI) in an ethical, secure, and scientifically responsible manner. The guidelines provide practical recommendations for identifying and managing risks, ensuring data integrity, maintaining traceability and transparency of generated content, and upholding accountability across all stages of the research process. They address crucial aspects such as human oversight, disclosure and citation of AI use, safeguarding sensitive information, and evaluating the ethical and societal implications of generative AI tools. As ‘living’ guidelines, they are regularly updated to keep pace with technological advances and emerging challenges, serving as a vital resource for promoting innovative yet ethically grounded research practices.