January 7, 2026, at 1:30 PM
Charlotte Laclau will give a seminar to the M1 and M2 Data Science students.
Abstract: Machine learning systems are increasingly used in sensitive applications such as hiring, credit scoring, recommendation, or fraud detection, raising important concerns about fairness. These systems may reproduce or amplify existing biases due to historical inequalities, biased data collection, or feedback effects.
In this talk, I will first introduce the main notions of algorithmic fairness, explain where these issues come from, and briefly review common mitigation strategies. I will then focus on a setting where fairness poses particularly subtle challenges: graph-structured data. From friendship recommendations to fraud detection, many decisions rely on network connections. I will show how biases can emerge from the graph structure itself in tasks such as edge prediction, and discuss how such structural biases can be identified and mitigated.
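As a rough illustration of the kind of structural bias mentioned in the abstract, the sketch below builds a small synthetic homophilous graph and checks whether a naive common-neighbours scorer over-recommends same-group edges. The two-block graph, the edge probabilities, and the common-neighbours heuristic are illustrative assumptions, not material from the talk itself.

```python
"""Minimal sketch (illustrative, not from the talk): homophily in a graph
can bias link prediction toward same-group edges even without any
sensitive attribute in the model."""
import networkx as nx

# Synthetic homophilous graph: two groups of 50 nodes, denser within
# groups (p=0.10) than across groups (p=0.02). These numbers are arbitrary.
sizes = [50, 50]
probs = [[0.10, 0.02],
         [0.02, 0.10]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
group = {n: (0 if n < 50 else 1) for n in G.nodes}

# Score every candidate (non-)edge with the common-neighbours heuristic,
# a simple stand-in for a link-prediction model.
scored = [(len(list(nx.common_neighbors(G, u, v))), u, v)
          for u, v in nx.non_edges(G)]
scored.sort(reverse=True)

# Compare the share of same-group pairs among the top-k recommendations
# with the share among all candidate pairs.
k = 100
top_same = sum(group[u] == group[v] for _, u, v in scored[:k]) / k
all_same = sum(group[u] == group[v] for _, u, v in scored) / len(scored)

print(f"same-group share among all candidate pairs: {all_same:.2f}")
print(f"same-group share among top-{k} predictions: {top_same:.2f}")
# The top-ranked predictions are dominated by same-group pairs: the graph
# structure alone pushes recommendations toward intra-group links.
```

Running this, the same-group share among the top-ranked predictions is far above its share among all candidate pairs, which is one simple way structural bias in edge prediction can be surfaced before any mitigation is applied.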
Agora Room, ESPRIT Building