AI hallucinations
Hallucinations are one of the most characteristic and problematic phenomena of generative AI. They occur when the model produces false, unverifiable, or inaccurate information yet presents it in clear, convincing, and seemingly well-founded wording. This makes it difficult for a non-expert user to distinguish correct responses from fabricated ones.
AI hallucinates because its objective is not to verify information but to predict the most probable continuation of a text given its training data.
When it lacks sufficient information, when the prompt is ambiguous, or when the model infers more than it actually knows, it fills the gaps with invented details that sound plausible.
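A minimal sketch of this idea, using a hypothetical toy model with invented probabilities (not a real LLM or any specific library): the decoding objective simply picks the most probable continuation, and nothing in that objective checks whether the chosen continuation is true.

```python
# Toy illustration: a language model scores possible continuations of a prompt
# and returns the most probable one. The probabilities below are invented for
# illustration only; they stand in for what a model might have "learned".
continuations = {
    "The paper was published in 2019 by Smith et al.": 0.46,   # fluent but fabricated
    "I could not find a reliable source for this claim.": 0.22,
    "The paper was published in 2021 by Jones et al.": 0.19,
    "No such paper appears to exist.": 0.13,
}

def generate(prompt: str) -> str:
    """Greedy decoding: return the highest-probability continuation."""
    return max(continuations, key=continuations.get)

print(generate("Cite the paper that introduced this method:"))
# -> "The paper was published in 2019 by Smith et al."
# The fabricated citation wins because it is the most probable phrasing,
# not because it was verified against any source.
```

The point of the sketch is only that fluency and probability, not factual accuracy, drive the output; a confident-sounding but invented answer can easily be the highest-scoring one.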
Common examples in academic contexts:
- Bibliographic references that do not exist, often with fabricated DOIs
- Invented laws, regulations, or protocols
- Attributing theories or concepts to incorrect authors
- Explanations that mix incompatible ideas
- Figures, percentages, or dates generated without any source
- Definitions that are formally correct but conceptually wrong
Hallucinations do not stem from bad intent or from an isolated technical error: they are inherent to how generative AI produces content.