Boosting AI-driven multimodal decoding of Speech and Language from Brain Signals (brAIn2lang, AIA2025-163317-C31)
- Research team:
- Eva Navas, Ibon Saratxaga, Jon Sanchez, Ander Barrena, Xabier de Zuazo, Ekain Arrieta
- Period:
- from 2025 to 2029
- Funding body:
- Ministerio de Ciencia, Innovación y Universidades
- Total funding:
- €596,625
- Description:
This subproject is part of the coordinated project brAIn2lang, which aims to develop AI technologies that decode language from brain activity. This subproject focuses on magnetoencephalography (MEG), a non-invasive neuroimaging technique, to study how speech is represented in the brain. It addresses a key scientific and technological challenge: generating intelligible spoken or written language directly from neural signals, with potential applications in communication aids and human-machine interaction.
We will collect high-quality MEG data from native Spanish speakers across four experimental conditions (listening, speaking aloud, silent articulation, and imagined speech) to disentangle the neural processes involved in speech production and perception. We will use these data to develop AI models that map brain signals to linguistic outputs, employing contrastive learning, transformer architectures, and speech generation techniques.
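As a rough illustration of the contrastive-alignment idea mentioned above, the sketch below pairs MEG windows with time-aligned speech embeddings using a symmetric InfoNCE loss. It is a minimal sketch, not the project's pipeline: the encoder, channel count, window length, and loss temperature are all hypothetical placeholders, and the speech embeddings are random stand-ins for representations that would come from a pretrained speech model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEGEncoder(nn.Module):
    """Hypothetical encoder: a (sensors x time) MEG window -> unit-norm
    embedding. A toy stand-in for the transformer architectures the
    project names; all layer choices here are illustrative."""
    def __init__(self, n_sensors=306, d_model=256):
        super().__init__()
        self.conv = nn.Conv1d(n_sensors, d_model, kernel_size=7, padding=3)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, n_sensors, time)
        h = F.gelu(self.conv(x)).mean(dim=-1)  # pool over the time axis
        return F.normalize(self.proj(h), dim=-1)

def contrastive_loss(meg_emb, speech_emb, temperature=0.07):
    """Symmetric InfoNCE: matched (MEG, speech) pairs in the batch are
    positives; every other pairing in the batch serves as a negative."""
    logits = meg_emb @ speech_emb.T / temperature  # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

# Random stand-in data: 8 windows, 306 MEG channels, 200 time samples.
meg = torch.randn(8, 306, 200)
speech_emb = F.normalize(torch.randn(8, 256), dim=-1)  # placeholder speech reps
loss = contrastive_loss(MEGEncoder()(meg), speech_emb)
loss.backward()
```

Once such an aligned embedding space is trained, it can support decoding by retrieval or by conditioning a speech generator on the MEG embedding, which is where the speech generation techniques mentioned above would enter.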
The project will contribute reproducible datasets, open-source decoding pipelines, and tools for model interpretability. By analyzing how models represent speech-related brain activity, the project will help uncover the neural basis of language and support the development of explainable brain-computer interfaces. This work directly contributes to next-generation, AI-driven brain-computer interfaces that enable more natural, adaptive, and inclusive communication, advancing the design of intelligent systems capable of aligning with users' internal states, an essential step toward hybrid human-machine intelligence.