European controls to mitigate bias in AI healthcare systems are inadequate
Iñigo de Miguel questions the practice of always turning to larger databases to address bias in healthcare systems that use AI
- Research
First publication date: 13/05/2025

De Miguel, an Ikerbasque Research Professor at the University of the Basque Country (UPV/EHU), has published in the journal Bioethics an analysis of the mechanisms used in Europe to verify that AI-based healthcare systems operate safely and do not engage in discriminatory and harmful practices. The researcher puts forward alternative policies to address the problem of bias in such systems.

Artificial intelligence systems are increasingly being used in all sectors, including healthcare. They serve different purposes; examples include diagnostic support systems (e.g., a system widely used in dermatology to determine whether a mole could develop into melanoma) and treatment recommendation systems (which, once various patient parameters have been entered, can suggest the treatment best suited to the patient).
Their capacity to improve and transform healthcare comes with inevitable risks. One of the biggest problems with artificial intelligence systems is bias. “Bias means that there is discrimination in what an AI system is indicating. Bias is a serious problem in healthcare, because it not only leads to a loss of accuracy, but also particularly affects certain sectors of the population,” explained De Miguel. “Let us suppose that we use a system that has been trained with people from a population in which very fair skin predominates; that system has an obvious bias because it does not work well with darker skin hues.” The researcher pays particular attention to the propagation of bias throughout a system's life cycle, since “more complex AI-based systems change over time; they are not stable”.
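To make the kind of disparity De Miguel describes concrete, here is a minimal Python sketch of a per-subgroup audit of a classifier's sensitivity. All group labels, data and numbers are hypothetical illustrations, not material from the article or the study.

# Minimal sketch of a per-subgroup bias audit for a binary classifier.
# All numbers and group labels are hypothetical illustrations of the
# kind of disparity described above.

def sensitivity(y_true, y_pred):
    """True-positive rate: of the actual positives, how many were caught."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return float("nan")
    return sum(p for _, p in positives) / len(positives)

# Hypothetical melanoma labels (1 = malignant) and model predictions,
# split by the skin-tone group of each patient.
groups = {
    "lighter_skin": {
        "y_true": [1, 1, 1, 1, 0, 0, 0, 0],
        "y_pred": [1, 1, 1, 0, 0, 0, 0, 0],  # 3 of 4 malignant moles caught
    },
    "darker_skin": {
        "y_true": [1, 1, 1, 1, 0, 0, 0, 0],
        "y_pred": [1, 0, 0, 0, 0, 0, 0, 0],  # only 1 of 4 caught
    },
}

for name, data in groups.items():
    tpr = sensitivity(data["y_true"], data["y_pred"])
    print(f"{name}: sensitivity = {tpr:.2f}")

# A large gap between the two sensitivities is the kind of subgroup
# disparity that a single aggregate accuracy figure would hide.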
The UPV/EHU lecturer has published an article in the journal Bioethics analysing different policies to mitigate bias in AI healthcare systems, including those that figure in recent European regulations on Artificial Intelligence and in the European Health Data Space (EHDS). De Miguel argues that “European regulations on medical products may be inadequate to address this challenge, which is not only a technical one but also a social one. Many of the methods used to verify healthcare products belong to another age, when AI did not exist. The current regulations are designed for traditional biomedical research, in which everything is relatively stable.”
On the use of larger amounts of data
The researcher supports the idea that “it is time to be creative in finding policy solutions for this difficult issue, where so much is at stake”. De Miguel acknowledges that the validation strategies for these systems are very complicated, but questions whether it is permissible to “process large amounts of personal, sensitive data to see if these bias issues can indeed be corrected. This strategy may generate risks, particularly in terms of privacy. Simply throwing more data at the problem seems like a reductionist approach that focuses exclusively on the technical components of systems, understanding bias solely in terms of code and its data. If more data are needed, it is clear that we must analyse where and how they are processed”.
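The reductionism De Miguel warns about can be shown with a toy numerical sketch: if the extra data has the same skewed composition, enlarging the dataset does not close the subgroup gap. Everything below (distributions, group proportions, the threshold model) is invented for illustration.

# Toy illustration: 100x more data with the same skewed mix leaves the
# minority group's accuracy stuck near chance. All values are invented.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, frac_minority=0.05):
    """Binary task where the optimal threshold differs between groups."""
    group = rng.random(n) < frac_minority          # True = minority group
    y = rng.integers(0, 2, n)                      # true labels
    # Feature centred at y for the majority, but shifted for the minority.
    x = y + np.where(group, 1.5, 0.0) + rng.normal(0, 0.4, n)
    return x, y, group

def fit_threshold(x, y):
    """Pick the cut-off that maximises overall training accuracy."""
    candidates = np.linspace(x.min(), x.max(), 200)
    accs = [np.mean((x > t).astype(int) == y) for t in candidates]
    return candidates[int(np.argmax(accs))]

for n in (1_000, 100_000):                         # 100x more data, same mix
    x, y, g = make_data(n)
    t = fit_threshold(x, y)
    x_te, y_te, g_te = make_data(50_000)
    pred = (x_te > t).astype(int)
    acc_min = np.mean(pred[g_te] == y_te[g_te])
    acc_maj = np.mean(pred[~g_te] == y_te[~g_te])
    print(f"n={n}: majority acc={acc_maj:.2f}, minority acc={acc_min:.2f}")

Because the threshold is tuned to the majority distribution, the minority group's accuracy hovers near 0.5 at both dataset sizes: more of the same data changes nothing, which is why where and how the data are collected matters more than sheer volume.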
In this respect, the researcher considers that the set of policies analysed in the regulations on AI and in the EHDS “are particularly sensitive when it comes to establishing safeguards and limitations on where and how data will be processed to mitigate this bias. However, it would also be necessary to see who has the right to verify whether the bias is being properly addressed, and at which stages of the AI healthcare system's life cycle. On this point the policies may not be so ambitious”.
Regulatory testbeds or sandboxes
In the article, De Miguel raises the possibility of including mandatory validation mechanisms not only for the design and development phases, but also for post-marketing application. “You don't always get a better system by inputting lots more data. Sometimes you have to test it in other ways.” An example of this would be the creation of regulatory testbeds for digital healthcare to systematically evaluate AI technologies in real-world settings. “Just as new drugs are tested on a small scale to see if they work, AI systems, rather than being tested on a large scale, should be tested on the scale of a single hospital, for example. And once the system has been found to work, and to be safe, etc., it can be opened up to other locations.”
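As an illustration of how such a staged, sandbox-style rollout could be operationalised, here is a brief Python sketch of the gating logic. The site name, thresholds and evaluation function are hypothetical assumptions, not part of the regulations or the article.

# Illustrative sketch of a sandbox-style rollout gate: a system is piloted
# at a single site and only cleared for wider use if it meets both an
# overall safety bar and a subgroup-fairness bar. All values are invented.

PILOT_SITE = "hospital_A"
MIN_SENSITIVITY = 0.85          # hypothetical safety bar
MAX_SUBGROUP_GAP = 0.05         # hypothetical fairness bar

def evaluate(site):
    """Stand-in for a real pilot evaluation; returns per-group sensitivity."""
    return {"lighter_skin": 0.91, "darker_skin": 0.78}  # hypothetical results

scores = evaluate(PILOT_SITE)
gap = max(scores.values()) - min(scores.values())

if min(scores.values()) >= MIN_SENSITIVITY and gap <= MAX_SUBGROUP_GAP:
    print(f"{PILOT_SITE}: criteria met, eligible for wider rollout")
else:
    print(f"{PILOT_SITE}: hold rollout (scores={scores}, gap={gap:.2f})")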
De Miguel suggests that the institutions already involved in the biomedical research and healthcare sectors, such as evaluation agencies and ethics committees, should participate more proactively, and that third parties, including civil society, who wish to verify that AI healthcare systems operate safely and do not engage in discriminatory or harmful practices should be given access to validation in secure environments.
“We are aware that artificial intelligence is going to pose problems. It is important to see how we mitigate them, because eliminating them is almost impossible. At the end of the day, this boils down to how to reduce the inevitable, because we cannot scrap AI nor should it be scrapped. There are going to be problems along the way, and we must try to solve them in the best way possible, while compromising fundamental rights as little as possible,” concluded De Miguel.
Additional information
Iñigo de Miguel-Beriain holds a PhD in philosophy, law and biomedical research. He works as an Ikerbasque Research Professor at the UPV/EHU. He has published seven books and over two hundred articles and book chapters on bioethics, healthcare law, and ethical and legal issues relating to new technologies, such as AI. He has participated in over ten EU-funded projects relating to these topics. De Miguel is a lecturer in the UPV/EHU’s Department of Public Law and lectures in the Joint Degree Course in Business Administration and Management and Law.
The article referred to was published in collaboration with Guillermo Lazcoz, a researcher at the Carlos III Institute of Health, the Biomedical Research Network Centre for Rare Diseases (CIBERER-ISCIII) and the Jiménez Díaz Foundation Health Research Institute (IIS-FJD).
Bibliographic reference
- Is more data always better? On alternative policies to mitigate bias in Artificial Intelligence health systems.
- Bioethics
- DOI: 10.1111/bioe.13398