Decision Boundary Geometries and Robustness of Neural Networks
- Speaker: Sven Wang (University of Cambridge)
- Date & Time: Wednesday 14 February 2018, 17:00 - 18:30
- Venue: Cambridge University Engineering Department, CBL Seminar room BE4-38. For directions see http://learning.eng.cam.ac.uk/Public/Directions
Abstract
Adversarial examples are small perturbations to an input point that cause a neural network (NN) to misclassify it.
Recent research shows the existence of "universal adversarial perturbations" which, unlike earlier adversarial examples, are not specific to individual data points or network architectures. We will also discuss results that attempt to link this behaviour to the geometry of the decision boundaries learned by neural networks.
Adversarial inputs are not by themselves the main concern for the value alignment problem. However, the insight they give into the internals of neural networks will be important if future AIs rely on NNs at all.
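To make the abstract concrete: one standard (non-universal) way to construct such a perturbation is the fast gradient sign method (FGSM), which nudges the input in the direction that most increases the classification loss. The sketch below is only an illustration of that idea, not material from the talk; `model`, `x`, and `label` are hypothetical placeholders for a trained PyTorch classifier and a labelled test point.

```python
# Minimal FGSM sketch: perturb an input so a classifier is likely to mislabel it.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small loss-increasing perturbation of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the gradient, bounded elementwise by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Usage (assuming `model`, `x`, `label` exist):
# x_adv = fgsm_perturb(model, x, label)
# print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # predictions often differ
```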
Relevant readings:
- The Robustness of Deep Networks: A Geometrical Perspective (http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8103145&tag=1)
- Adversarial Spheres (https://arxiv.org/abs/1801.02774)
Series: This talk is part of the Engineering Safe AI series.
Included in Lists
- Cambridge talks
- Cambridge University Engineering Department, CBL Seminar room BE4-38. For directions see http://learning.eng.cam.ac.uk/Public/Directions
- Chris Davis' list
- Engineering Safe AI
- Trust & Technology Initiative - interesting events
- yk449