FAQ
What is Artificial Intelligence (AI)?
Artificial Intelligence is a set of technologies capable of learning from data and performing tasks associated with human capabilities, such as interpreting information, supporting decision-making, or creating content.
What is the difference between “predictive” AI and “generative” AI?
- Predictive AI: analyses data, identifies patterns, and classifies or predicts outcomes (e.g. recommendation systems or spam detection).
- Generative AI: creates new content (text, images, audio, video, or code) based on what it has learned.
Does AI “think” like a human?
No. AI generates outputs based on probabilities, using learned patterns. As a result, it can make mistakes or generate false information that appears plausible (so-called hallucinations).
How can AI be useful in teaching?
- Designing and structuring courses and teaching–learning sequences.
- Creating presentations, explanatory texts, examples, and summaries.
- Adapting materials to diverse learner profiles or specific needs.
- Reviewing the style, clarity, and coherence of texts.
- Producing multimedia materials (videos, podcasts, images, etc.).
- Designing guides, tutorials, and activities for autonomous learning.
- Searching for, classifying, and synthesising literature (with verification).
- Supporting translations of non-confidential texts.
- Generating assessment ideas or questions, always subject to teacher verification.
Which practices should be avoided in teaching?
- Entering students’ personal data or confidential information.
- Using outputs without reviewing or verifying them.
- Delegating assessment or grading decisions to AI.
- Presenting AI-generated materials as one’s own without disclosure.
- Replacing critical thinking or active learning activities with automated tasks.
How can AI be useful in research?
- Searching for and synthesising academic literature (organising, filtering, summarising).
- Generating hypotheses and research questions based on existing literature.
- Supporting methodological design (structure and proposals).
- Assisting with data processing (extraction, organisation, simulations, or preliminary analyses).
- Supporting academic writing (drafts and structural organisation).
- Identifying new ideas and research gaps.
- Producing dissemination materials (summaries, infographics).
- Supporting translations of non-confidential texts.
Which practices should be avoided in research?
- Entering personal, sensitive, or student-related data.
- Uploading confidential documents (theses, manuscripts under review, proposals, etc.).
- Incorporating unverified content (including references and data).
- Failing to disclose the contribution of AI (authorship ambiguity).
- Delegating statistical analysis or interpretation without human oversight.
- Any use that compromises academic integrity in the research process.
How can AI support learning?
- Searching for information and supplementary materials.
- Generating initial ideas for assignments (as a starting point).
- Organising information (summaries, concept maps, outlines).
- Revising and editing texts.
- Supporting self-assessment (exam-style practice exercises).
- Learning languages and developing linguistic competences.
Which practices should students avoid?
- Submitting AI-generated content as their own.
- Using AI-generated content without declaring its use.
- Using AI in exams or continuous assessment when it is explicitly prohibited.
How can AI be useful in management and administration?
- Drafting and structuring documents.
- Synthesising and explaining documentation (without sensitive data).
- Translating non-confidential texts.
- Creating templates, tables, and organising repetitive tasks.
- Providing coherent responses in user support contexts (virtual assistants).
- Supporting small-scale computing or technical tasks through code generation.
- Organising data to support decision-making.
Which practices should be avoided by administrative and support staff?
- Entering personal, academic, or confidential data.
- Fully automating processes without human supervision.
- Using AI in ways that compromise security, privacy, or data protection.
- Using unverified outputs in official documents or institutional communications.
Can I enter personal data or confidential information into an AI system?
No. As a general rule, personal, sensitive, academic, or confidential institutional data must not be entered into AI systems.
Is using AI plagiarism?
Not necessarily. The problem arises when its use is concealed, when outputs are presented as one’s own, or when intellectual authorship is delegated to AI. Transparency is the key principle.
What does it mean to “declare” the use of AI, and how does this differ from “citing” it?
- Declaring: informing that AI was used and for what purpose (mandatory if it contributed to the work).
- Citing: formally referencing AI-generated text or outputs when they appear in the work (verbatim, translated, or adapted), in accordance with academic conventions.
Can AI be used as a “scientific source”?
No. AI can assist with searching, summarising, or organising information, but it does not replace academic articles, datasets, or primary scholarly sources. All information must be verified.
What are AI “hallucinations”?
Situations in which AI generates false or non-verifiable information that appears convincing (e.g. fabricated references, unsupported figures, or incorrect attributions).
How can errors and hallucinations be reduced?
- Using clear prompts with explicit constraints (e.g. “do not invent data”).
- Asking the system to indicate uncertainty.
- Manually verifying key information (data, references, regulations).
- Using AI as a structural support tool, not as a primary source.
Can AI exhibit bias?
Yes. AI may reproduce stereotypes or represent cultures, genders, languages, or contexts unevenly. It is advisable to explicitly request diversity, compare outputs, and review responses critically.
What is a prompt?
A prompt is the instruction given to an AI system: what you want, for whom, in what format, and under which constraints.
What makes a prompt “good”?
Clarity, contextual information, a defined objective, specified format, tone, level of detail, and constraints (e.g. “include two examples”, “avoid technical jargon”, “state uncertainty if unsure”).
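The elements above can be illustrated with a short sketch. This calls no real AI service; the function name `build_prompt` and its parameters are hypothetical and simply show how objective, audience, format, and constraints can be combined into a single, explicit instruction:

```python
# Hypothetical sketch: assembling a prompt from the elements listed above.
# No AI system is invoked; this only composes the instruction text.

def build_prompt(objective, audience, output_format, constraints):
    """Combine objective, audience, format, and constraints into one prompt."""
    lines = [
        f"Task: {objective}",
        f"Audience: {audience}",
        f"Format: {output_format}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    objective="Explain what an AI hallucination is",
    audience="first-year undergraduate students",
    output_format="a short paragraph followed by two examples",
    constraints=[
        "do not invent data",
        "state uncertainty if unsure",
        "avoid technical jargon",
    ],
)
print(prompt)
```

Whatever the tool, the underlying habit is the same: state explicitly what you want, for whom, in what form, and under which constraints, rather than leaving the system to guess.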
Can I upload copyrighted materials (articles, books, teaching materials) to an AI system?
Generally, no—unless you have authorisation, an appropriate licence, or a legal basis. Prioritise your own materials or openly licensed resources (e.g. Creative Commons), and always ensure proper attribution.
What is the key principle summarising responsible AI use?
Human oversight + privacy + verification + transparency.
AI can be highly beneficial, but ultimate academic, ethical, and legal responsibility always lies with the user.
What is the general principle for AI use in academic contexts?
AI may support academic processes, but evaluative decisions and teaching responsibilities cannot be delegated to automated systems. Any use must remain under human supervision and aligned with the principles of academic integrity, transparency, and responsibility.
Can AI be used to mark assignments or assign grades?
AI may support tasks such as correction, review, or text improvement, but it must not autonomously decide or assign grades. Academic assessment is a teaching responsibility and must always be supervised by a human, particularly when formative or administrative consequences are involved.
Can AI-based proctoring be used during examinations?
The use of AI-based proctoring systems is particularly sensitive. Such technologies may affect privacy, the presumption of honesty, and other fundamental rights, and therefore should not be used automatically or indiscriminately. Any implementation requires a prior, context-specific legal, ethical, and organisational assessment.
Is the use of AI in undergraduate dissertations, master’s theses, or doctoral theses considered high risk?
Using generative AI as a support tool—for example, to organise ideas, improve writing style, or provide translation support—is not inherently considered a high-risk practice. However, concealing its use or delegating academic authorship to AI violates principles of academic integrity. Therefore, any use must be declared in accordance with institutional guidelines.