Interpretability in Machine Learning

If you have a question about this talk, please contact Alessandro Davide Ialongo.

Abstract:

Interpretability is often considered crucial for enabling effective real-world deployment of intelligent systems. Unlike performance measures such as accuracy, objective criteria for measuring interpretability are difficult to identify. The volume of research on interpretability is growing rapidly (a Google Scholar search turns up more than 20,000 publications related to interpretability in ML over the last five years). However, there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it, and there is an urgent need for these issues to be rigorously defined and addressed. Recent European Union regulation (the GDPR, taking effect in 2018) will require algorithms that make decisions based on user-level predictors, and that significantly affect users, to provide an explanation (a “right to explanation”). One common taxonomy of interpretability in ML distinguishes global from local interpretability. The former aims at a general understanding of how the system works as a whole and of what patterns are present in the data; the latter provides an explanation for a particular prediction or decision.

We take a look here at two algorithms, one from each of the aforementioned categories. Prediction difference analysis is a method for visualizing the response of a deep neural network to a specific input: when classifying images, it highlights the areas of a given input image that provide evidence for or against a certain class. We also examine an algorithm that facilitates human understanding of, and reasoning about, a dataset by learning prototypes and criticisms. This method, referred to as MMD-critic, is motivated by the Bayesian model criticism framework.
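
As a concrete illustration of the first (local) method, the following is a minimal Python sketch of the idea behind prediction difference analysis. The names predict_proba and reference_images, the (H, W, C) image layout, and the patch-wise marginal sampling are assumptions of this sketch; the paper itself uses a more careful conditional sampling scheme over pixel neighbourhoods.

import numpy as np

def prediction_difference_map(predict_proba, image, target_class,
                              reference_images, patch=8, n_draws=10, rng=None):
    # Sketch assumptions: predict_proba maps a batch of images (N, H, W, C)
    # to class probabilities (N, num_classes); reference_images is an array
    # of images (M, H, W, C) used to approximate the marginal distribution
    # of each patch.
    rng = np.random.default_rng() if rng is None else rng
    H, W, _ = image.shape
    eps = 1e-12

    def log_odds(p):
        return np.log2(p + eps) - np.log2(1.0 - p + eps)

    p_full = predict_proba(image[None])[0, target_class]
    relevance = np.zeros((H, W))

    for y in range(0, H, patch):
        for x in range(0, W, patch):
            # Marginalise the patch: overwrite it with the corresponding patch
            # from randomly chosen reference images, then average the
            # predicted probability of the target class.
            idx = rng.integers(0, len(reference_images), size=n_draws)
            perturbed = np.repeat(image[None], n_draws, axis=0)
            perturbed[:, y:y+patch, x:x+patch] = \
                reference_images[idx, y:y+patch, x:x+patch]
            p_marg = predict_proba(perturbed)[:, target_class].mean()

            # Weight of evidence: positive where the patch supports the class,
            # negative where it speaks against it.
            relevance[y:y+patch, x:x+patch] = log_odds(p_full) - log_odds(p_marg)

    return relevance

In practice predict_proba would wrap the softmax output of the network under study, and the resulting relevance map can be overlaid on the input image as a heatmap.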
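
For the second (global) method, MMD-critic, here is a simplified Python sketch under an RBF kernel: prototypes are chosen greedily so as to reduce the squared maximum mean discrepancy (MMD) between the data and the prototype set, and criticisms are the points where the witness function is largest in magnitude. The kernel choice, the exhaustive greedy search, and the omission of the paper's log-determinant diversity regulariser are simplifications made for this sketch.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def select_prototypes(X, n_prototypes, gamma=1.0):
    # Greedily add the point that most reduces the squared MMD between the
    # empirical distribution of X and the prototype set. The data-data term
    # of the MMD is constant, so we maximise 2 * cross - within.
    K = rbf_kernel(X, X, gamma)
    col_means = K.mean(axis=0)          # (1/n) * sum_i k(x_i, x_j)
    selected = []
    for _ in range(n_prototypes):
        best_j, best_score = None, -np.inf
        for j in range(len(X)):
            if j in selected:
                continue
            candidate = selected + [j]
            cross = col_means[candidate].mean()               # data vs. prototypes
            within = K[np.ix_(candidate, candidate)].mean()   # prototypes vs. prototypes
            score = 2.0 * cross - within
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

def select_criticisms(X, prototypes, n_criticisms, gamma=1.0):
    # Criticisms are the points where the witness function, i.e. the gap
    # between the data kernel mean and the prototype kernel mean, is largest
    # in magnitude.
    K = rbf_kernel(X, X, gamma)
    witness = K.mean(axis=0) - K[prototypes].mean(axis=0)
    ranked = np.argsort(-np.abs(witness))
    return [int(i) for i in ranked if i not in set(prototypes)][:n_criticisms]

Given a data matrix X of shape (n, d), select_prototypes(X, 5) followed by select_criticisms(X, prototypes, 3) returns the indices of five representative points and of three points that the prototypes explain poorly.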

Recommended reading:

  • “Towards A Rigorous Science of Interpretable Machine Learning”, Finale Doshi-Velez, Been Kim, arXiv 2017.
  • “Visualizing Deep Neural Network Decisions: Prediction Difference Analysis”, Luisa Zintgraf, Taco Cohen, Tameem Adel, Max Welling, ICLR 2017.
  • “Examples are not Enough, Learn to Criticize! Criticism for Interpretability”, Been Kim, Rajiv Khanna, Oluwasanmi Koyejo, NIPS 2016.

This talk is part of the Machine Learning Reading Group @ CUED series.
