How to detect and reduce hallucinations
Detecting and reducing hallucinations is an essential digital competence. It involves knowing how to recognize when AI may be providing incorrect information and applying strategies to minimize such errors. In the university context, these skills are fundamental to ensuring the quality of teaching, learning, and research.
Why hallucinations are difficult to detect
Hallucinations can easily go unnoticed because AI writes with clarity, confidence, and coherence, even when the content is false. This appearance of rigor can generate a false sense of trust, especially in complex topics or areas unfamiliar to the user. Therefore, simply reading the response is not enough; it must be interpreted critically.
How to detect a hallucination
Hallucinations often manifest through recognizable signals:
- Excessive confidence in complex topics, without nuance or warnings.
- Overly precise data (dates, percentages, figures) that do not appear in any verifiable source.
- Non-existent bibliographic references or DOIs, or suspiciously generic titles.
- Internal inconsistencies or the mixing of incompatible theories and concepts.
- Significant changes when repeating the same question, indicating a lack of solid grounding.
- Information added that the user did not request, common when the model “fills in gaps.”
Recognizing these signals helps to question the response and seek confirmation from reliable sources. Two of them, over-precise figures and DOI-like strings, can even be flagged programmatically, as sketched below.
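The following Python sketch is one heuristic way to do that flagging. The regular expressions are assumptions (the DOI pattern is adapted from Crossref's published screening regex), and a flagged item is only a pointer to what deserves manual checking, not proof of fabrication.

```python
# Heuristic screening sketch: extract two checkable signals from an AI
# answer (DOI-like strings and over-precise figures) into a checklist
# for manual verification. The patterns are assumptions, not tests.
import re

# DOI pattern adapted from Crossref's published screening regex.
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")
# Over-precise values: decimals/percentages, thousands-separated figures, years.
FIGURE_RE = re.compile(r"\d+\.\d+%?|\b\d{1,3}(?:,\d{3})+\b|\b(?:19|20)\d{2}\b")

def verification_checklist(answer: str) -> dict:
    """Return the items in an AI answer that deserve manual verification."""
    # Trim sentence punctuation the regex may swallow (imperfect for the
    # rare DOIs that legitimately end in parentheses).
    dois = [d.rstrip(").,;") for d in DOI_RE.findall(answer)]
    # Strip DOIs first so their digits are not re-flagged as figures.
    remainder = DOI_RE.sub("", answer)
    return {"dois_to_check": dois, "figures_to_check": FIGURE_RE.findall(remainder)}

answer = ("According to Smith (2021), adoption grew by 47.3% "
          "(doi:10.1234/example.5678).")
print(verification_checklist(answer))
# {'dois_to_check': ['10.1234/example.5678'],
#  'figures_to_check': ['2021', '47.3%']}
```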
How to reduce hallucinations
Although they cannot be completely avoided, their frequency can be reduced through appropriate practices:
1. Formulate clear, specific prompts with explicit limits
“Do not invent data or authors. If you do not have verifiable information, state so.”
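In an API setting, the same limit can be placed in the system message. Below is a minimal sketch assuming the openai Python SDK (v1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name and question are illustrative, and the same wording works when typed into any chat interface.

```python
# Minimal sketch, assuming the openai Python SDK (v1+) and an API key in
# OPENAI_API_KEY. The model name and question are illustrative; the point
# is the explicit limit placed in the system message.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "Do not invent data or authors. If you do not have "
                "verifiable information, state so."
            ),
        },
        {"role": "user", "content": "Summarize the research on peer assessment in higher education."},
    ],
)
print(response.choices[0].message.content)
```

Instructions like this lower the rate of fabricated detail, but they do not guarantee its absence; the response still needs the checks described in the following steps.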
2. Ask the AI to indicate its uncertainties
“Indicate which aspects of your response may not be fully reliable.”
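As code, this can be a second turn that asks the model to audit its own answer, under the same assumptions as the previous sketch (openai SDK, illustrative model and question). Self-reported uncertainty is itself unverified, so it helps prioritize manual checks rather than replace them.

```python
# Sketch under the same assumptions as above (openai SDK v1+, illustrative
# model and question). The second user turn asks the model to flag the weak
# points of its own answer; treat the result as a triage aid only.
from openai import OpenAI

client = OpenAI()
question = "Summarize the evidence on flipped classrooms."  # illustrative

first = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)
answer = first.choices[0].message.content

audit = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {
            "role": "user",
            "content": "Indicate which aspects of your response may not be fully reliable.",
        },
    ],
)
print(audit.choices[0].message.content)
```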
3. Request only real, verifiable sources
Then review the references manually.
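One part of that review can be automated with a DOI resolution check. The sketch below assumes the third-party requests library; the first example DOI belongs to a real Nature article, while the second is deliberately made up.

```python
# Rough screening sketch, assuming the third-party `requests` library.
# A DOI that does not resolve at doi.org is a strong fabrication signal;
# a DOI that does resolve still needs a manual check that it actually
# points to the work the AI cited.
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org recognizes the DOI (anything but HTTP 404)."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=10)
    return resp.status_code != 404

print(doi_resolves("10.1038/nature14539"))   # real DOI -> True
print(doi_resolves("10.9999/made.up.2024"))  # fabricated example -> False
```

Resolution is necessary but not sufficient: models sometimes attach real DOIs to the wrong titles or authors, so the landing page still has to match the citation.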
4. Compare multiple versions of the same answer
If contradictions appear, part of the content is likely fabricated.
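A standard-library sketch of this comparison follows; the sample answers are illustrative, and in practice they would come from asking the model the same question several times. Low pairwise similarity suggests weak grounding, while high similarity does not prove correctness, because a model can be consistently wrong.

```python
# Illustrative consistency check using only the standard library: measure
# pairwise similarity between repeated answers to the same question.
from difflib import SequenceMatcher
from itertools import combinations

def consistency_report(answers: list[str]) -> list[tuple[int, int, float]]:
    """Pairwise similarity ratios (0 to 1) between repeated answers."""
    return [
        (i, j, round(SequenceMatcher(None, a, b).ratio(), 2))
        for (i, a), (j, b) in combinations(enumerate(answers), 2)
    ]

answers = [
    "The law was passed in 1998 and amended in 2005.",
    "The law was passed in 1998 and amended in 2005.",
    "The law dates from 2001, with a major reform in 2010.",
]
for i, j, score in consistency_report(answers):
    print(f"answers {i} vs {j}: similarity {score}")
```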
5. Manually verify key information
Especially in academic work or research.
6. Avoid requesting exact figures, very recent data, or highly specialized information
These are critical points where hallucinations are more frequent.
7. Use AI as a structural aid, not as a primary source
It can help organize ideas or synthesize texts, but factual validity must always be confirmed through real sources.
Hallucinations are not an occasional flaw, but an inherent characteristic of generative AI. Managing them requires critical thinking, systematic verification, and a clear understanding of the model’s limitations.