Doctoral thesis defense: New perspectives on machine learning fairness: algorithmic design and evaluation

Author: Ainhize Barrainkua Aguirre

Thesis: New perspectives on machine learning fairness: algorithmic design and evaluation

Supervisors: Jose Antonio Lozano / Novi Quadrianto

Date: 30 January 2026
Time: 11:30
Venue: Ada Lovelace room (Facultad de Informática)

Abstract:

"Algorithmic systems are increasingly embedded in critical decision-making processes across finance, hiring, healthcare, and criminal justice. While these systems promise efficiency and consistency, they also risk perpetuating societal biases, often producing unequal outcomes for different demographic groups. Traditional approaches to fairness assessment and bias mitigation assume idealized conditions, including full access to sensitive attributes and identically distributed training and deployment data. In practice, these assumptions are frequently violated: sensitive information may be unavailable due to legal or privacy constraints, and real-world deployment environments often differ substantially from training conditions. Thus, the environments in which automated decision-making systems actually operate introduce new uncertainties for fairness assessment and bias mitigation, posing significant challenges for existing approaches, which are often ill-suited to such conditions.

This thesis tackles these limitations by systematically integrating such sources of uncertainty and non-ideal conditions into both the evaluation and enhancement of the fairness guarantees of classifiers. We first introduce a framework for reliably auditing fairness guarantees, addressing the inherent instability of fairness metrics and enabling robust comparisons across classifiers. Next, we provide a comprehensive review of methods for ensuring fairness under distribution shift, offering a novel taxonomy and a broad perspective on related work and open challenges. We then develop a fairness-enhancing intervention for settings where sensitive attributes are unavailable, providing theoretical guarantees that extend fairness assessment to contexts with partial access to demographic information. Finally, we examine fairness in recourse-aware systems, which deliver actionable feedback to individuals affected by negative algorithmic predictions. We demonstrate that widely used metrics for evaluating fairness in recourse frequently fail to capture significant fairness issues, and introduce a novel framework that provides a holistic perspective on biases, accompanied by a practical mitigation intervention to promote more equitable outcomes in these pipelines.

Through a combination of theoretical analysis, practical algorithms, and extensive empirical evaluation, this thesis contributes a comprehensive framework for fair decision-making under more realistic conditions. The proposed methods extend the applicability of fairness interventions beyond idealized settings, providing robust, actionable tools for designing and auditing algorithms that operate responsibly in complex, real-world environments."
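The abstract's first contribution concerns reliably auditing fairness guarantees despite the instability of fairness metrics. As a purely illustrative sketch (not the framework developed in the thesis), the snippet below computes one common group-fairness metric, the demographic parity difference, and attaches a percentile bootstrap interval to make that sampling instability visible. All function names and the toy data are hypothetical:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two demographic groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def bootstrap_interval(y_pred, group, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the metric, showing its sampling variability."""
    rng = np.random.default_rng(seed)
    n = len(y_pred)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample individuals with replacement
        stats.append(demographic_parity_difference(y_pred[idx], group[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Toy data: binary predictions whose positive rate differs by group.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=500)
y_pred = (rng.random(500) < np.where(group == 0, 0.6, 0.4)).astype(int)

print(demographic_parity_difference(y_pred, group))
print(bootstrap_interval(y_pred, group))
```

A single point estimate of such a metric can swing noticeably between samples; reporting an interval rather than a number is one simple way an audit can account for that, which is the kind of instability the auditing framework in the thesis addresses rigorously.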