Responsible use of AI: fraud and plagiarism
Generative AI does not copy existing works; it creates new content based on patterns.
Even so, issues of authorship may arise when students present such content as their own without declaring its use.
What matters is not whether AI generates content, but whether its contribution is concealed.
Using generative AI is NOT plagiarism in academic terms
AI does not copy texts or ideas from specific authors, so its use does not fit the normative definition of plagiarism. Using it is legitimate as long as it is declared and the student retains conceptual authorship of the work.
Using AI is not plagiarism. Not declaring it can be fraud.
AI use may be considered fraud when:
- It is used in an assessed assignment and not declared
- AI-generated text (fully or partially) is submitted as the student's own
- AI replaces the student’s reasoning or personal contribution
- Translations, explanations, or summaries generated by AI are used without indicating it
- The instructor’s guidelines on AI use are not followed
The problem is not the tool itself but the lack of transparency, which undermines academic integrity.
Associated risks: originality, authorship, and detection
Current challenges include:
- A high rate of false positives in AI detectors
- Instructor intuition that is unreliable for distinguishing AI-generated texts
- Tools designed to hide AI involvement
- Potential errors, fabrications, or conceptual inconsistencies in AI-generated texts
Banning generative AI is not practical: the key is educating for responsible use.
Recommendations to avoid authorship issues
- Always be transparent.
- Critically review any AI-generated content.
- Do not delegate conceptual authorship to the tool.
- Follow instructor guidelines.
- Do not use AI to replace essential learning processes.
- Do not enter personal data or copyrighted material.