ERA NET P2P Initiatives

ANTIDOTE: ArgumeNtaTIon-Driven explainable artificial intelligence fOr digiTal mEdicine.


Specific programme: Joint Programming Initiative in “Explainable Machine Learning-based Artificial Intelligence (XAI) and Novel Computational Approaches for Environmental Sustainability (CES)”, funded through CHIST-ERA IV and AEI “Proyectos de Cooperación Internacional” (Project PCI2020-120717-2, funded by MCIN/AEI/10.13039/501100011033 and by the European Union “NextGenerationEU”/PRTR).

APCIN code: PCI2020-120717-2

UPV/EHU Partner Status: Beneficiary

UPV/EHU PI: Rodrigo Agerri

Project start: 01/04/2021
Project end: 31/07/2024

Brief description: Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well it requires, among other factors: selecting a proper level of generality/specificity of the explanation; considering assumptions about the familiarity of the explanation beneficiary with the AI task under consideration; referring to specific elements that have contributed to the decision; making use of additional knowledge (e.g. metadata) which might not be part of the prediction process; selecting appropriate examples; and providing evidence supporting negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way.

Given these considerations, ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher-level schemas proper to the human argumentation capacity. The ANTIDOTE integrated vision is supported by three considerations: (i) in neural architectures, the correlation between internal states of the network (e.g., weights assumed by single nodes) and the justification of the network's classification outcome is not well studied; (ii) high-quality explanations are crucially based on argumentation mechanisms (e.g., providing supporting examples and rejected alternatives) that are, to a large extent, task independent; (iii) in real settings, providing explanations is inherently an interactive process, where an explanatory dialogue takes place between the system and the user. Accordingly, ANTIDOTE will exploit cross-disciplinary competences in three areas, i.e., deep learning, argumentation, and interactivity, to support a broader and innovative view of explainable AI. Although we envision a general integrated approach to explainable AI, we will focus on a number of deep learning tasks in the medical domain, where the need for high-quality explanations for clinical case deliberation is critical.
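To make point (i) above more concrete, the sketch below shows one standard way of relating a network's internal states to a classification outcome: gradient-based saliency over input embeddings. This is purely illustrative and not part of the ANTIDOTE system; the toy classifier, vocabulary size, and token IDs are hypothetical, and gradient saliency stands in for whichever attribution technique the project ultimately adopts.

```python
# Illustrative sketch (not ANTIDOTE code): gradient-based saliency for a
# toy text classifier. All model details and inputs are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, embed_dim, num_classes = 100, 16, 2

class ToyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)        # (seq_len, embed_dim)
        emb.retain_grad()                  # keep gradients on this non-leaf tensor
        logits = self.fc(emb.mean(dim=0))  # mean-pool tokens, then classify
        return logits, emb

model = ToyClassifier()
tokens = torch.tensor([5, 17, 42, 8])      # a toy "clinical note" as token IDs
logits, emb = model(tokens)
pred = logits.argmax()

# Backpropagate the predicted class score; the per-token gradient norm is a
# crude measure of how much each input token contributed to the decision.
logits[pred].backward()
saliency = emb.grad.norm(dim=1)
for tok, score in zip(tokens.tolist(), saliency.tolist()):
    print(f"token {tok}: saliency {score:.4f}")
```

Gradient norms of this kind give only a low-level attribution signal; the point of the ANTIDOTE vision is that such signals would then feed higher-level argumentative schemas (supporting examples, rejected alternatives) and an interactive explanatory dialogue.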