What biases in AI are
Generative Artificial Intelligence tools can produce very useful results, but they can also reproduce the biases present in the data on which they were trained. Understanding these biases is essential for critical, ethical, and responsible use within the university.
A bias is a systematic tendency that leads a model to provide responses that favour, exclude, or unevenly represent certain groups, ideas, or situations.
AI is not objective: it learns from data created by people in specific social contexts, so its responses reflect limitations, inequalities, and partial perspectives. This is why AI outputs must always be interpreted with a critical mindset.
Why biases appear
Incomplete or unrepresentative training data:
If certain languages, cultures, or social profiles dominate the dataset, AI will tend to reproduce that limited viewpoint.
Patterns containing stereotypes or inequalities:
The model learns whatever appears in the data, without distinguishing whether it is appropriate or problematic.
Insufficient or ambiguous information in the prompt:
When gaps exist, AI fills them with what is “most probable,” not what is most accurate (see the short sketch after this list).
User-introduced biases:
Providing biased examples or poorly formulated questions can lead to partial responses.
Overrepresentation of certain disciplines:
Fields such as computer science, health, or economics appear more frequently in training data, resulting in more detailed responses than in humanities, education, or the arts.
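To make the idea of the “most probable” continuation concrete, here is a minimal, purely illustrative Python sketch. It is not how real language models work internally, and the tiny corpus is invented for the example, but the statistical principle is the same: when the training data is imbalanced, the most frequent pattern wins.

```python
from collections import Counter

# Tiny made-up corpus standing in for skewed training data
# (assumption: invented for illustration only).
corpus = [
    "the doctor said he would review the results",
    "the doctor said he was running late",
    "the doctor said he had seen the scan",
    "the doctor said she would call back",
]

# Count which word follows the phrase "the doctor said".
next_words = Counter()
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 3):
        if words[i:i + 3] == ["the", "doctor", "said"]:
            next_words[words[i + 3]] += 1

# A frequency-based "gap filler" picks the most probable continuation,
# which simply mirrors the imbalance in the data: "he" appears 3 times, "she" once.
print(next_words.most_common())      # [('he', 3), ('she', 1)]
print(next_words.most_common(1)[0])  # ('he', 3)
```

The point of the sketch is only that “most probable” is defined by the data the system has seen, not by what is accurate or fair.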
Common ways biases appear
Gender bias
“Nursing assistant” → woman. “Doctor” → man.
Cultural bias
Assuming Western norms as universal.
Linguistic bias
Higher response quality in English than in Basque or Spanish.
Socioeconomic bias
Assuming universal access to technology, mobility, or resources.
Disciplinary bias
Greater accuracy in STEM fields than in humanities or social sciences.
Biases in AI-generated images
Images may reflect stereotypical associations, such as male executives, under-representation of female scientists, or family models based on a single cultural norm.