AI and scientific validity
When using AI in academic contexts, it is essential to distinguish between two types of tools:
- Generative AI tools (ChatGPT, Gemini, Claude, etc.), which produce new text.
- AI-based information retrieval tools (Consensus, Perplexity, etc.), which search for and summarise real academic articles.
In both cases, AI can be helpful; however, it should never be considered a primary scientific source.
Generative AI is not a scientific source
Generative tools can be useful for drafting, structuring, or understanding ideas, but they do not produce scientific evidence. This is because:
- They do not verify the data they generate.
- They may confuse concepts or invent information (hallucinations).
- They do not clearly show where their claims come from.
- They generate probabilistic outputs, not necessarily based on verified academic sources.
For this reason, AI-generated content cannot be used as the scientific basis of an academic work.
Even if the tool is cited, such citation does not replace the obligation to reference real authors, nor does it justify supporting a claim solely on the basis of AI-generated text.
Tools that retrieve real academic articles: useful, but not sufficient
There are AI-based tools designed to locate and summarise scientific literature (for example, Consensus or Perplexity). These tools can be useful because:
- They help to identify relevant literature.
- They provide quick summaries of findings.
However, they also have important limitations:
- They do not replace a critical reading of the original article.
- They may misinterpret results.
- In some cases, they may even fabricate citations when the underlying model hallucinates.
These tools should therefore be understood as search tools, not as scientific sources.
As a result, what must always be cited are the original articles, not the AI tool; and no claim should ever be included based solely on an automated summary.
AI can help to search for, summarise, or explore literature; however, it is not scientific evidence in itself, and it cannot replace direct consultation of real academic sources.