General Criteria and Safeguards for Use
Transparency in the Declaration of the Use of Artificial Intelligence
It is recommended that the use of Artificial Intelligence systems in the production of academic work be declared openly. This recommendation is particularly relevant in the case of:
- Coursework assignments
- Final Degree Projects (FDP)
- Final Master’s Projects (FMP)
- Doctoral theses
In order to promote a consistent and comprehensible practice, such guidance may refer to any institutional formats, procedures or templates available for declaring the use of Artificial Intelligence systems.
The absence of clear information regarding the use of these systems may give rise to uncertainty concerning authorship and the process by which work has been produced. For this reason, it is recommended that actions be guided by principles of transparency and academic responsibility.
Academic Integrity and Authorship
From the perspective of academic integrity, it is essential that the use of Artificial Intelligence systems be conducted in a transparent manner. In this regard, it is considered good practice to declare explicitly the use of AI systems in the production of content, and not to present content generated wholly or partially by such systems as one's own without that declaration.
A lack of transparency in the use of these systems may conflict with the principles of honesty and academic integrity that govern university activity. Consequently, their use should be guided by responsibility and critical judgement.
Academic Assessment and Grading
From the perspective of responsible use, it is particularly important that final marking, the allocation of scores and the grading of students remain the responsibility of academic staff, and that these decisions are not delegated to automated systems.
Likewise, it is advisable to avoid the design of assessment processes in which, either directly or indirectly, the final academic decision is conditioned by Artificial Intelligence systems or based on automatically generated outputs. In this respect, the use of AI should not replace the professional judgement or pedagogical reasoning that underpin the assessment of learning.
Uses of Particular Sensitivity in the Application of Artificial Intelligence Systems
For guidance purposes, areas of particular sensitivity are considered to be those in which Artificial Intelligence systems may intervene. In these contexts, it is especially important to act with caution, promote effective human oversight, and attend to principles such as transparency, responsibility and the prevention of potential biases. They include:
- Admission or access to study programmes
- Assessment of learning or academic performance
- Allocation of levels, pathways, support measures or educational classifications
- Supervision or monitoring systems in assessed activities, including proctoring environments
- Decisions with a direct impact on academic rights or opportunities
In these areas, it is recommended to exercise heightened caution and to ensure that the relevant decisions are understood, reviewed and taken responsibility for by the individuals accountable for them, avoiding automated processes that may undermine equity or accountability.