Human Values and Explainable Artificial Intelligence

If you have a question about this talk, please contact Tellef S. Raabe.

A common objection to the use of artificial intelligence in decision-making is the concern that it is often difficult to explain or understand how AI systems make decisions. There is a growing body of technical AI research developing techniques for making AI more “explainable” or “interpretable”. However, it is still not well understood why this is an important property for an AI system to possess, or what types of explanations are most important. While there are empirical studies of which types of explanations individuals subjected to AI decision-making find satisfactory, psychological evidence suggests people’s sense of understanding is often unreliable and easy to manipulate. In this paper, I argue that a pragmatist account of explanation provides a fruitful framework for exploring the problem of AI Explainability, which allows us to combine normative and empirical perspectives on user values.

This talk is part of the Cambridge Technology & New Media Research Cluster series.


© 2006-2019 Talks.cam, University of Cambridge.