Adversarial Explanations - You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods
If you have a question about this talk, please contact Mateja Jamnik.

Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME (Ribeiro et al., 2016), even suggests that model explanations can answer the question "Why should I trust you?" Here we show a straightforward method for modifying a pre-trained model to manipulate the output of many popular feature importance explanation methods with little change in accuracy, thus demonstrating the danger of trusting such explanation methods. We show how this explanation attack can mask a model's discriminatory use of a sensitive feature, raising strong concerns about using such explanation methods to check model fairness.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
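To make the fairness-auditing workflow that the abstract refers to concrete, below is a minimal, hypothetical sketch of how an auditor might use LIME feature importances to check whether a model relies on a sensitive feature. The dataset, model, and the "gender" feature name are illustrative placeholders, not material from the talk; the talk's point is that an attacker who modifies the model can make precisely this kind of check report an innocuous ranking.

```python
# Illustrative sketch only (not the speaker's code): auditing a trained
# classifier's reliance on a sensitive feature via LIME feature importances.
# The data, model, and feature names below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Toy tabular data: column 0 is the sensitive feature, columns 1-3 are benign.
feature_names = ["gender", "income", "tenure", "age"]
X = rng.normal(size=(500, 4))
# A deliberately unfair target that leans heavily on the sensitive feature.
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["neg", "pos"],
    discretize_continuous=True,
)

# Accumulate the absolute LIME weight per feature over a sample of instances;
# an auditor might flag the model if "gender" ranks near the top.
totals = {name: 0.0 for name in feature_names}
for row in X[:50]:
    exp = explainer.explain_instance(row, model.predict_proba, num_features=4)
    for desc, weight in exp.as_list():
        for name in feature_names:
            if name in desc:
                totals[name] += abs(weight)
                break

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

An audit like this trusts that the explanation reflects the model's true decision process; the attack described in the talk modifies the model so that feature importance methods downplay the sensitive feature while predictive accuracy barely changes, which is why the ranking above cannot by itself certify fairness.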