Adversarial Machine Learning
- Speaker: Tudor Paraschivescu
- Date & Time: Wednesday 18 October 2017, 19:00 - 19:30
- Venue: Wolfson Hall, Churchill College
Abstract
An adversarial example is an instance of input data that has been modified in such a way that a human observer would not notice the difference, but a Machine Learning model would be tricked into misclassifying it. In this talk, we are going to see how such examples can compromise the integrity of Neural Network models. For example, a person could 'paint' a STOP sign in such a way that a self-driving car would interpret it as something completely different. We are going to explore how easy it is to generate images that will be misclassified by state-of-the-art architectures. Afterwards, we will look into currently available defences and how one can employ a transferability attack to bypass them. The talk will conclude by comparing testing with verification, and how each is used to assess the security of a Neural Network architecture.
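The generation step the abstract alludes to can be sketched with a one-step fast gradient sign attack (FGSM-style; the abstract does not name a specific method) on a toy linear classifier. Everything here — the weights, the input, and the perturbation budget `eps` — is illustrative, not from the talk:

```python
import numpy as np

# Toy linear classifier: predict class +1 if w . x > 0, else -1.
# For the logistic loss L(x) = log(1 + exp(-y * (w . x))), the gradient
# with respect to the input x is -y * w * sigmoid(-y * (w . x)), whose
# sign is simply sign(-y * w). The fast gradient sign attack perturbs
# the input by eps in that direction: x_adv = x + eps * sign(grad_x L).

def fgsm(x, w, y, eps):
    """One-step fast gradient sign attack on a linear classifier."""
    grad_sign = np.sign(-y * w)      # sign of the input gradient of the loss
    return x + eps * grad_sign

w = np.array([1.0, -2.0, 0.5])       # model weights (illustrative)
x = np.array([0.3, -0.2, 0.1])       # clean input, true label +1
y = 1

clean_score = w @ x                  # positive: classified correctly
x_adv = fgsm(x, w, y, eps=0.4)       # small L-infinity perturbation
adv_score = w @ x_adv                # negative: prediction has flipped
```

The key point, which carries over to deep networks, is that a perturbation bounded by `eps` per coordinate (imperceptible for images when `eps` is small) can still move the input across the decision boundary, because the attack aligns every coordinate with the gradient of the loss.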
Series: This talk is part of the Churchill CompSci Talks series.