Full Seminar Details
Dr. Ángel Pavón Pérez
KMi, The Open University

This event will take place on Tuesday 29 July 2025 at 11:30
Machine Learning systems increasingly shape critical decisions: who gets a loan, who receives healthcare support, who is flagged as a risk. However, the data behind these systems often reflects historical and societal biases, leading to unfair and potentially harmful outcomes, particularly for marginalised groups. In this talk, I’ll explore three core challenges in addressing bias in machine learning, drawing from my thesis research in the financial domain, where gender bias is particularly embedded:

1. Hidden bias in the data – Sensitive attributes are often not explicitly included in datasets, but bias creeps in through proxy variables. Detecting and mitigating these hidden proxies is a major challenge, requiring methods that go beyond standard fairness checks.

2. Fairness without sensitive attributes – In many real-world scenarios, we don’t have access to information like gender or race due to privacy or legal concerns. This makes it difficult to audit or mitigate bias, raising the question: how do we build fair systems when we can't measure unfairness directly?

3. Conflicting fairness definitions – Fairness isn’t one-size-fits-all. There are multiple, sometimes incompatible definitions of fairness, and addressing one can worsen another (see the short sketch below). Understanding how to navigate and balance these trade-offs remains an open and complex problem.

I’ll share how my work tackles each of these challenges, through proxy detection, knowledge transfer methods, and a large-scale study of bias mitigation strategies. The aim is to offer practical tools and insights for anyone working toward more responsible, equitable AI systems.
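To make the third challenge concrete, here is a minimal sketch, using a hypothetical toy dataset rather than anything from the talk itself, of how two standard fairness definitions, demographic parity and equal opportunity, can disagree about the same classifier.

```python
# Minimal sketch with hypothetical toy data (not from the speaker's work):
# two groups, true labels, and a classifier's predictions.
import numpy as np

group  = np.array(["A"] * 6 + ["B"] * 6)
y_true = np.array([1, 1, 1, 0, 0, 0,   1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0,   1, 1, 0, 0, 0, 0])

def selection_rate(pred):
    # Fraction of individuals predicted positive (basis of demographic parity).
    return pred.mean()

def true_positive_rate(true, pred):
    # Fraction of truly positive individuals predicted positive
    # (basis of equal opportunity).
    return pred[true == 1].mean()

for g in ["A", "B"]:
    mask = group == g
    print(g,
          "selection rate:", round(selection_rate(y_pred[mask]), 3),
          "TPR:", round(true_positive_rate(y_true[mask], y_pred[mask]), 3))

# Both groups have the same selection rate (demographic parity holds),
# yet group A has a lower true positive rate than group B (equal opportunity
# is violated), because the groups have different base rates of y_true == 1.
# Adjusting the classifier to equalise TPRs would break demographic parity here.
```

The toy numbers are chosen only to show that satisfying one definition does not imply satisfying the other; in real data the trade-offs the talk addresses are far less clean-cut.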

Maven of the Month
We are also inviting top experts in AI and Knowledge Technologies to discuss major socio-technological topics with an audience comprising both members of the Knowledge Media Institute and the wider staff at The Open University. Unlike our seminar series, these events follow a Q&A format.