Guarantees for the Use of AI under the EU AI Act
The integration of Artificial Intelligence (AI) tools in the university context is not merely a technological or pedagogical matter, but also a legal, ethical and organisational one.
The European Union Artificial Intelligence Act (EU AI Act) establishes a common legal framework that introduces differentiated obligations according to the level of risk posed by AI systems and their potential impact on individuals’ fundamental rights.
Within this risk-based approach, higher education is explicitly addressed. Certain uses of AI in the university context, such as student admission, learning assessment, the allocation of educational levels or pathways, or the monitoring of behaviour during assessments, may be regarded as particularly sensitive. Such uses entail heightened requirements in terms of transparency, human oversight, quality, traceability and accountability.
The EU AI Act approach: risk-based regulation
The EU AI Act does not regulate all AI systems in the same way. Its core principle is a risk-based approach, under which AI uses are classified according to their potential impact on individuals, their fundamental rights and their life opportunities.
From this perspective, the key issue is not the technology itself, but the purpose for which it is used and the consequences it may generate. The same AI system may be considered low risk in one context and high risk in another, depending on whether it influences decisions that are significant for individuals.
In simplified terms, the EU AI Act distinguishes between:
- Prohibited uses, which pose an unacceptable risk to fundamental rights.
- High-risk uses, permitted only under strict conditions.
- Limited-risk uses, subject to transparency obligations.
- Minimal-risk uses, with no additional specific obligations.
Most common applications of generative AI in teaching, learning, research or university management are not automatically classified as high risk. However, the regulation identifies particularly sensitive domains, and education and training are explicitly included among them wherever AI influences decisions that may significantly affect individuals' academic or professional trajectories.