Interpretability in Machine Learning
- Speaker: Adrian Weller; Tameem Adel Hesham
- Date & Time: Thursday 09 November 2017, 13:30 - 15:00
- đ Venue: Engineering Department, CBL Seminar Room 4-38
Abstract
Interpretability is often considered crucial for the effective real-world deployment of intelligent systems. Unlike performance measures such as accuracy, objective criteria for measuring interpretability are difficult to identify. The volume of research on interpretability is growing rapidly (Google Scholar returns more than 20,000 publications related to interpretability in ML from the last five years), yet there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for most of these issues to be rigorously defined and addressed. Recent European Union regulation (the GDPR) will, by 2018, require algorithms that make decisions based on user-level predictors which significantly affect those users to provide an explanation (a “right to explanation”). One taxonomy of interpretability in ML distinguishes global from local interpretability algorithms. The former aims at a general understanding of how the system works as a whole and of what patterns are present in the data; local interpretability, on the other hand, explains a particular prediction or decision.
Here we look at two algorithms, one from each of these categories. The prediction difference analysis method visualizes the response of a deep neural network to a specific input: when classifying images, it highlights the areas of a given input image that provide evidence for or against a certain class. We also examine an algorithm that facilitates human understanding and reasoning about a dataset by learning prototypes and criticisms. The method, referred to as MMD-critic, is motivated by the Bayesian model criticism framework.
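To give a flavour of the idea behind MMD-critic (a rough sketch, not the authors' implementation), prototypes can be selected greedily so as to minimize the squared maximum mean discrepancy (MMD) between the prototype set and the full dataset under an RBF kernel; the `gamma` bandwidth and the plain greedy loop below are illustrative simplifications:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of A and B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd_sq(X, Z, gamma=1.0):
    # Squared MMD between the data X and a candidate prototype set Z;
    # zero when Z represents X perfectly (e.g. Z == X).
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Z, gamma).mean()
            + rbf_kernel(Z, Z, gamma).mean())

def greedy_prototypes(X, m, gamma=1.0):
    # Greedily pick m indices of X whose points minimize MMD^2 to the data.
    chosen, remaining = [], list(range(len(X)))
    for _ in range(m):
        best = min(remaining, key=lambda i: mmd_sq(X, X[chosen + [i]], gamma))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Criticisms (points poorly represented by the prototypes) are then found by maximizing a related witness function; the Kim et al. paper below gives the full formulation.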
Recommended reading:
- “Towards A Rigorous Science of Interpretable Machine Learning”, Finale Doshi-Velez, Been Kim, arXiv 2017.
- “Visualizing Deep Neural Network Decisions: Prediction Difference Analysis”, Luisa Zintgraf, Taco Cohen, Tameem Adel, Max Welling, ICLR 2017.
- “Examples are not Enough, Learn to Criticize! Criticism for Interpretability”, Been Kim, Rajiv Khanna, Oluwasanmi Koyejo, NIPS 2016.
Series: This talk is part of the Machine Learning Reading Group @ CUED series.