Principles for the use of AI

The integration of Artificial Intelligence requires a clear framework to guide its use across all domains. These principles provide a shared reference to ensure that the use of AI at the University of the Basque Country (UPV/EHU) aligns with institutional values and contributes to the development of a fair, safe, and socially committed academic community.

The goal is not to regulate the technology itself, but to ensure that its integration respects human dignity, preserves academic integrity, and strengthens trust in our educational and research processes.

The following principles establish the foundations for a responsible, transparent, and socially oriented use of generative AI within our university.

1. Equity

The use of generative AI must be based on the principle of equity, ensuring that all members of the university community have access to these tools without discrimination. It is essential to avoid platforms or tools that may produce biased or discriminatory outputs.

2. Ethics

All uses and developments of generative AI must be grounded in the ethical principles recognised by the university. This entails acting responsibly, transparently, and justly across all stages: design, implementation, and application. It is crucial to ensure that AI is not used for purposes that could undermine human dignity, promote discrimination, manipulate information, or compromise privacy.

3. Confidentiality

The use of generative AI must strictly respect the protection and privacy of personal and institutional data. Sensitive information—such as personal identifiers, financial data, private contact details, or materials that might compromise security—should not be input into AI systems.

4. Academic honesty

Dishonesty in the use of generative AI can seriously undermine learning processes and, in particular, assessment. Therefore, its use must be explicitly declared in academic work and projects, especially bachelor's theses, master's theses, and doctoral dissertations. It is essential to follow institutional recommendations on how to indicate AI use in academic documents.

5. Transparency

Whenever possible, generative AI developments used in research should be transparent, making algorithms, neural models, and training datasets publicly available. Transparency strengthens trust and scientific reproducibility.

6. Training

Responsible use of generative AI requires adequate training. The university community should have access to progressive and cross-disciplinary instruction that incorporates the technical, social, and ethical dimensions of these technologies. Digital literacy is essential for safe and effective use.

7. Open collaboration

Generative AI should foster a culture of collaboration among users, teaching staff, researchers, and developers. Sharing knowledge and responsible practices improves the quality of academic use and promotes collective innovation.

8. Oversight

The use of generative AI must be accompanied by human oversight. Important decisions must not be fully delegated to automated systems. Those responsible must have the competencies required to review, validate, and correct AI-generated outputs.

9. Sustainability

The development and use of generative AI must take into account its environmental and energy impact. The university should promote solutions aligned with the Sustainable Development Goals, supporting responsible practices that reduce the ecological footprint of these technologies and promote social equity and inclusion.

10. Social contribution

Solutions based on generative AI must aim at the common good. Their design and application should help address social challenges, improve well-being, and strengthen institutions. AI should not be understood merely as a technological or commercial tool, but as a resource that serves social development and shared knowledge.