General Criteria and Safeguards for Use

Transparency in the Declaration of the Use of Artificial Intelligence

It is recommended that the conditions governing the use of Artificial Intelligence tools be made clear within the context of academic activities. Such guidance should indicate whether their use is permitted, as well as the transparency guidelines to be applied when they are used.

This recommendation is particularly relevant in the case of:

  • Coursework assignments
  • Final Degree Projects (FDP)
  • Final Master’s Projects (FMP)
  • Doctoral theses

In order to promote a consistent and comprehensible practice, such guidance may refer to the institutional formats, procedures or templates that may be available for declaring the use of Artificial Intelligence systems.

The absence of clear information regarding the use of these systems may give rise to uncertainty concerning authorship and the process by which work has been produced. For this reason, it is recommended that actions be guided by principles of transparency and academic responsibility.

Academic Integrity and Authorship

The use of Artificial Intelligence systems within the university context does not alter or replace the principle of personal academic authorship, which continues to rest with the individual responsible for the work, activity or output submitted.

From the perspective of academic integrity, it is essential that the use of Artificial Intelligence systems be conducted in a transparent manner. In this regard, it is considered good practice to explicitly declare the use of AI systems in the production of content and not to present as one’s own any content generated wholly or partially through such systems without such declaration.

A lack of transparency in the use of these systems may conflict with the principles of honesty and academic integrity that govern university activity. Consequently, their use should be guided by responsibility and critical judgement.

Academic Assessment and Grading

The use of Artificial Intelligence systems in academic assessment processes may be considered as a support resource for certain auxiliary tasks, provided that their application takes place under effective human supervision. Such supervision entails the genuine capacity to review, interpret, amend and, where appropriate, disregard the results generated.

From the perspective of responsible use, it is particularly important that final marking, the allocation of scores and the grading of students remain the responsibility of academic staff, thereby avoiding the delegation of these decisions to automated systems.

Likewise, it is advisable to avoid the design of assessment processes in which, either directly or indirectly, the final academic decision is conditioned by Artificial Intelligence systems or based on automatically generated outputs. In this respect, the use of AI should not replace the professional judgement or pedagogical reasoning that underpin the assessment of learning.

Uses of Particular Sensitivity in the Application of Artificial Intelligence Systems

The use of Artificial Intelligence systems requires special consideration when it intervenes in processes that may have a direct impact on individuals’ rights, opportunities or academic and professional trajectories.

In such contexts, it is especially important to act with caution, promote effective human oversight, and attend to principles such as transparency, responsibility and the prevention of potential biases.

For guidance purposes, the following are considered areas of particular sensitivity in which Artificial Intelligence systems may intervene:

  • Admission or access to study programmes
  • Assessment of learning or academic performance
  • Allocation of levels, pathways, support measures or educational classifications
  • Supervision or monitoring systems in assessed activities, including proctoring environments
  • Decisions with a direct impact on academic rights or opportunities

In these areas, it is recommended to exercise heightened caution and to ensure that relevant decisions are understood, reviewed and assumed by responsible individuals, avoiding automated processes that may limit equity or accountability.