AI and high-risk uses
In the educational domain, the EU AI Act treats as potentially sensitive those AI systems used to support or automate decisions with significant consequences for students. This includes, among others, the following cases:
- Systems that determine or condition access or admission to university studies.
- Systems that assess learning, academic performance or student progression when the outcome has academic or administrative effects.
- Systems that assign levels, pathways, support measures or educational classifications.
- Systems intended to monitor or detect behaviour in assessment contexts, such as examinations or official tests (e.g. automated proctoring).
In these cases, the risk does not stem from the mere use of AI, but from its capacity to influence decisions that affect rights, educational opportunities, fairness or equal treatment. For this reason, the EU AI Act requires such uses to be subject to enhanced safeguards, regardless of whether the technology is developed internally or procured from external providers.
If AI may influence a decision that affects a person’s assessment, rights or academic opportunities, it should not be used automatically or in isolation.
When an AI system is used in contexts of particular sensitivity, its application requires additional conditions that strengthen the protection of individuals and institutional accountability. These include, in particular:
- Effective human oversight, ensuring that AI does not replace academic or administrative decision-making.
- Transparency, clearly informing affected individuals that AI is being used and for what purpose.
- Quality, reliability and error control, especially in assessment contexts.
- Prevention of bias and discrimination, with particular attention to fairness and equity.
- Traceability and documentation, enabling a review of how the system has been used.
- Clear accountability, which always rests with the institution and the persons responsible for the process, not with the tool itself.
Does this use of AI require particular caution?
The following checklist helps identify uses of AI that should not be implemented autonomously and that require prior reflection on their legal, ethical and organisational suitability, in line with the risk-based approach of the EU AI Act.
If you answer “yes” to any of these questions, the use should not be applied in isolation.
Self-assessment checklist
1. Does the AI intervene in decisions that directly affect students’ academic pathways?
2. Does the AI correct, score or assess students, wholly or partially?
3. Does the result generated by the AI have real consequences for an individual?
4. Is the AI used to monitor or supervise during exams or assessed tests?
5. Are you unable to clearly and understandably explain how the AI produces the result?
6. Is there a risk of bias, discrimination or unequal treatment?
7. Are personal, academic or sensitive data introduced into AI tools external to the university?
8. Is the final decision, in practice, determined by the AI?
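The triage rule behind this checklist ("yes" to any question means the use should not be applied in isolation) can be expressed as a minimal screening helper. The sketch below is purely illustrative and not part of the EU AI Act; the question list and function names are hypothetical conveniences for institutions that want to record answers systematically:

```python
# Illustrative sketch only: encodes the self-assessment checklist above.
# The "any yes" rule mirrors the guidance that a single affirmative
# answer means the AI use must not be applied in isolation.

CHECKLIST = [
    "Does the AI intervene in decisions that directly affect students' academic pathways?",
    "Does the AI correct, score or assess students, wholly or partially?",
    "Does the result generated by the AI have real consequences for an individual?",
    "Is the AI used to monitor or supervise during exams or assessed tests?",
    "Are you unable to clearly and understandably explain how the AI produces the result?",
    "Is there a risk of bias, discrimination or unequal treatment?",
    "Are personal, academic or sensitive data introduced into AI tools external to the university?",
    "Is the final decision, in practice, determined by the AI?",
]

def requires_particular_caution(answers: list[bool]) -> bool:
    """Return True if any checklist answer is 'yes', i.e. the use
    requires enhanced safeguards and effective human oversight."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer is required per checklist question")
    return any(answers)
```

Note that a `False` result does not certify compliance; it only indicates that none of the listed warning signs were reported, and the institution remains accountable for the assessment.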