Fairness in AI

If you have a question about this talk, please contact Elre Oldewage.

Applications of machine learning almost always involve data about people. Unfortunately, the training data for such applications usually encode the demographic disparities that are present (or have historically been present) in our society. This, in turn, may lead to machine learning systems that reinforce discrimination, not necessarily by design, but because of bias in the training data. To machine learning researchers, the solution may seem simple: collect better, unbiased data. However, as we will discuss, bias is all but impossible to escape completely. Instead, we must develop techniques to identify and counter biased decision-making by ML models.

In this talk, we will provide motivating examples and discuss a number of sources from which bias and discrimination may arise. We will consider different definitions of fairness, distilling them into three main criteria: independence, separation, and sufficiency. We will also discuss a number of bias mitigation strategies by which these criteria may be achieved. Finally, we will discuss the limitations of the three observational criteria and briefly consider fairness from a causal perspective.
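
For concreteness (this is an editorial aside, not part of the talk abstract), the three criteria are usually stated as conditional independence conditions between a prediction Yhat, the true label Y, and a sensitive attribute A: independence requires Yhat ⊥ A, separation requires Yhat ⊥ A | Y, and sufficiency requires Y ⊥ A | Yhat. The short Python sketch below, using synthetic data and purely illustrative variable names, shows how each criterion can be checked empirically in the binary case.

# Minimal sketch: empirical checks of the three observational fairness
# criteria for a binary sensitive attribute A, binary label Y and binary
# prediction Yhat. The data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
A = rng.integers(0, 2, n)      # sensitive attribute (illustrative)
Y = rng.integers(0, 2, n)      # true label
Yhat = rng.integers(0, 2, n)   # model prediction

def rate(mask, event):
    """Empirical P(event | mask); nan if the conditioning set is empty."""
    m = mask.sum()
    return (event & mask).sum() / m if m else float("nan")

# Independence: Yhat ⊥ A  ->  equal acceptance rates across groups.
independence_gap = abs(rate(A == 0, Yhat == 1) - rate(A == 1, Yhat == 1))

# Separation: Yhat ⊥ A | Y  ->  equal true/false positive rates across groups.
tpr_gap = abs(rate((A == 0) & (Y == 1), Yhat == 1)
              - rate((A == 1) & (Y == 1), Yhat == 1))
fpr_gap = abs(rate((A == 0) & (Y == 0), Yhat == 1)
              - rate((A == 1) & (Y == 0), Yhat == 1))

# Sufficiency: Y ⊥ A | Yhat  ->  equal precision (calibration) across groups.
ppv_gap = abs(rate((A == 0) & (Yhat == 1), Y == 1)
              - rate((A == 1) & (Yhat == 1), Y == 1))

print(f"independence gap: {independence_gap:.3f}")
print(f"separation gaps (TPR/FPR): {tpr_gap:.3f} / {fpr_gap:.3f}")
print(f"sufficiency gap (PPV): {ppv_gap:.3f}")

In practice these gaps would be computed on held-out model predictions rather than random data, and outside of degenerate cases the three criteria cannot all hold exactly at once, which is one reason the talk turns to their limitations and to causal notions of fairness.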

Recommended reading:

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. A Survey on Bias and Fairness in Machine Learning. arXiv preprint arXiv:1908.09635, 2019. Available at: https://arxiv.org/abs/1908.09635

For those who would like to read further on the topic, the Fair ML Book is available for free online:

Barocas, S., Hardt, M., and Narayanan, A. Fairness and Machine Learning. fairmlbook.org, 2019. Available at: http://www.fairmlbook.org

This talk is part of the Machine Learning Reading Group @ CUED series.
