Best Practices for the Responsible Use of AI
The use of artificial intelligence tools at university can bring significant benefits to teaching, learning, and research. However, some situations are particularly sensitive and require careful, supervised use, aligned with the principles of academic integrity, transparency, and respect for fundamental rights.
Assessment and student grading
AI may be used to support tasks such as marking, reviewing, or improving texts, but it must not assign grades autonomously. Academic assessment is a teaching responsibility and must always be carried out under human supervision, especially when it has formative or administrative consequences for students.
AI-based proctoring in exams
The use of AI-based proctoring systems is particularly sensitive. Such systems may affect privacy, the presumption of honesty, and other fundamental rights; they should therefore not be deployed automatically or across the board. In any case, their implementation requires a prior, context-specific assessment of their legal, ethical, and organisational suitability, taking into account the specific assessment setting.
Use of AI in undergraduate and postgraduate dissertations or doctoral theses
The use of generative AI as a support tool (for structuring ideas, improving writing style, or assisting with translation, for example) is not, in itself, considered a particularly sensitive practice. However, concealing its use or delegating academic authorship to AI undermines the principles of academic integrity. Any use must therefore be disclosed in accordance with the university's guidelines.