
Understanding Black-box Predictions via Influence Functions


If you have a question about this talk, please contact Adrian Weller.

How can we explain the predictions of a black-box model? In this paper, we use influence functions—a classic technique from robust statistics—to trace a model’s prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.
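As a rough illustration of the approach described in the abstract, the sketch below estimates the influence of a single training point on a test-point loss using only gradients and Hessian-vector products, as the talk describes. It is a minimal sketch, not the authors' implementation: the logistic-regression loss, the conjugate-gradient solver, and all function names are illustrative assumptions.

# Minimal influence-function sketch in JAX (illustrative; not the paper's code).
import jax
import jax.numpy as jnp

def loss(theta, x, y):
    # Hypothetical per-example logistic loss; any differentiable loss would do.
    logits = jnp.dot(x, theta)
    return jnp.log1p(jnp.exp(-y * logits))

def avg_loss(theta, X, Y):
    # Average training loss over the dataset.
    return jnp.mean(jax.vmap(lambda x, y: loss(theta, x, y))(X, Y))

def hvp(theta, X, Y, v):
    # Hessian-vector product of the training loss, without forming the Hessian.
    return jax.jvp(jax.grad(lambda t: avg_loss(t, X, Y)), (theta,), (v,))[1]

def conjugate_gradient(matvec, b, iters=100, tol=1e-8):
    # Solve H x = b using only matrix-vector products (oracle access to HVPs).
    x = jnp.zeros_like(b)
    r = b - matvec(x)
    p = r
    rs = jnp.dot(r, r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / jnp.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = jnp.dot(r, r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def influence(theta, X_train, Y_train, x_train, y_train, x_test, y_test):
    # Influence of training point z on the test loss:
    #   I(z, z_test) = -grad L(z_test)^T  H^{-1}  grad L(z)
    g_test = jax.grad(loss)(theta, x_test, y_test)
    s_test = conjugate_gradient(lambda v: hvp(theta, X_train, Y_train, v), g_test)
    g_train = jax.grad(loss)(theta, x_train, y_train)
    return -jnp.dot(s_test, g_train)

Ranking training points by this score (most positive to most negative influence) is one way to surface the training examples most responsible for a given prediction, as in the debugging and dataset-error-detection uses mentioned above.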

This talk is part of the Machine Learning @ CUED series.


 
