AI and high-risk uses

In the educational domain, the EU AI Act treats as potentially sensitive those AI systems used to support or automate decisions with significant consequences for students. This includes, among others, the following cases:

  • Systems that determine or condition access or admission to university studies.

  • Systems that assess learning, academic performance or student progression when the outcome has academic or administrative effects.

  • Systems that assign levels, pathways, support measures or educational classifications.

  • Systems intended to monitor or detect behaviour in assessment contexts, such as examinations or official tests (e.g. automated proctoring).

In these cases, the risk does not stem from the mere use of AI, but from its capacity to influence decisions that affect rights, educational opportunities, fairness or equal treatment. For this reason, the EU AI Act requires such uses to be subject to enhanced safeguards, regardless of whether the technology is developed internally or procured from external providers.

If AI may influence a decision that affects a person’s assessment, rights or academic opportunities, it should not be used automatically or in isolation.

When an AI system is used in contexts of particular sensitivity, its application requires additional conditions that strengthen the protection of individuals and institutional accountability. These include, in particular:

  • Effective human oversight, ensuring that AI does not replace academic or administrative decision-making.
  • Transparency, clearly informing affected individuals that AI is being used and for what purpose.
  • Quality, reliability and error control, especially in assessment contexts.
  • Prevention of bias and discrimination, with particular attention to fairness and equity.
  • Traceability and documentation, enabling a review of how the system has been used.
  • Clear accountability, which always rests with the institution and the persons responsible for the process, not with the tool itself.

Does this use of AI require particular caution?

The following checklist helps identify uses of AI that should not be implemented autonomously and that require prior reflection on their legal, ethical and organisational suitability, in line with the risk-based approach of the EU AI Act.

If you answer “yes” to any of these questions, the use should not be applied in isolation.

Self-assessment checklist

1. Does the AI intervene in decisions that directly affect students’ academic pathways?

Examples: admission, grading, learning assessment, allocation of levels or pathways, granting academic support or disciplinary measures.
Why it matters: AI may influence decisions affecting rights, opportunities and academic outcomes.

2. Does the AI correct, score or assess students, wholly or partially?

Examples: automated marking of exercises, generation of grades, rankings, performance reports or assessment indicators.
Why it matters: evaluative judgement may shift from the human decision-maker to the tool.

3. Does the result generated by the AI have real consequences for an individual?

Examples: passing or failing, progressing or not in a degree programme, accessing or not certain educational opportunities.
Why it matters: the system’s effects are real and verifiable, not merely theoretical.

4. Is the AI used to monitor or supervise during exams or assessed tests?

Examples: automated proctoring, fraud detection or behavioural analysis during assessments.
Why it matters: it may affect fundamental rights such as privacy or the presumption of honesty.

5. Are you unable to clearly and understandably explain how the AI produces the result?

Examples: opaque models, unknown criteria, inability to justify a specific decision.
Why it matters: without explainability, there is no transparency or possibility of review.

6. Is there a risk of bias, discrimination or unequal treatment?

Examples: differing outcomes based on gender, language, origin, disability or socio-economic background.
Why it matters: AI may reproduce or amplify existing inequalities.

7. Are personal, academic or sensitive data introduced into AI tools external to the university?

Examples: commercial platforms, cloud services or models not directly controlled by the UPV/EHU.
Why it matters: this entails legal and data protection risks.

8. Is the final decision, in practice, determined by the AI?

Examples: automatic acceptance of the result, lack of a real possibility to modify it, merely formal review.
Why it matters: this results in a de facto automation of the decision.
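The checklist's decision rule (a single "yes" means the use should not be applied in isolation) can be sketched as a minimal screening function. This is an illustrative aid only, not an official tool; the abbreviated question texts and the function name `requires_caution` are assumptions introduced here, not part of the EU AI Act or any UPV/EHU procedure.

```python
# Illustrative sketch: the eight self-assessment questions encoded as a
# simple screening function. Question texts are abbreviated; all names
# here are hypothetical.

CHECKLIST = [
    "Intervenes in decisions affecting students' academic pathways?",
    "Corrects, scores or assesses students, wholly or partially?",
    "Result has real consequences for an individual?",
    "Monitors or supervises during exams or assessed tests?",
    "Unable to explain clearly how the AI produces the result?",
    "Risk of bias, discrimination or unequal treatment?",
    "Personal or sensitive data sent to tools external to the university?",
    "Final decision determined in practice by the AI?",
]

def requires_caution(answers: list[bool]) -> bool:
    """A single 'yes' means the use must not be applied in isolation."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist question is required")
    return any(answers)

# Example: an automated-proctoring pilot answering 'yes' only to question 4.
answers = [False, False, False, True, False, False, False, False]
print(requires_caution(answers))  # True -> enhanced safeguards apply
```

A positive result does not prohibit the use; in line with the text above, it signals that the use requires prior reflection on its legal, ethical and organisational suitability before deployment.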