Limitations and errors of Artificial Intelligence
Generative Artificial Intelligence offers new opportunities for teaching, research, and content creation, but it also presents structural limitations that must be understood before integrating it into academic work.
These limitations are not occasional malfunctions or exceptions: they are inherent to how generative models operate and affect any tool built on them.
AI can produce incorrect information that appears rigorous, a phenomenon known as hallucination. It may also reproduce the biases, inequalities, and gaps present in the data on which it was trained. These two categories, errors in content and distortions in representation, explain why human review remains essential.
Furthermore, current models do not understand the world as humans do; they do not access knowledge through comprehension but through statistical patterns. This means that:
- They cannot guarantee the accuracy of what they generate
- They do not detect when they are wrong
- They do not distinguish between an established fact and a plausible assumption
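The statistical nature of generation described above can be illustrated with a deliberately simplified sketch. The following is not a real language model: it is a toy bigram predictor over an invented corpus (all sentences and the false statement in it are fabricated for the demo), showing how a system can emit fluent text purely from word-frequency counts, with no mechanism for telling a true statement from a false one.

```python
from collections import defaultdict, Counter
import random

# Toy corpus, invented for illustration. Note that it deliberately
# contains one false statement alongside true ones.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is lyon . "  # false, but statistically present
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=6, seed=0):
    """Produce n words by sampling each next word from its observed frequency."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        candidates, weights = zip(*options.items())
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Because "paris", "madrid", and "lyon" all follow "is" with equal frequency here, the model can produce "the capital of france is lyon" with exactly the same fluency as the true sentence. Nothing in the statistics marks one continuation as a fact and the other as an error, which is the core of the limitations listed above.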
Recognising these limitations enables the development of a critical and responsible attitude, essential in a university environment where precision, truthfulness, and academic quality are indispensable.