BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Decision Boundary Geometries and Robustness of Neural Networks - S
 ven Wang (University of Cambridge)
DTSTART:20180214T170000Z
DTEND:20180214T183000Z
UID:TALK101404@talks.cam.ac.uk
CONTACT:Adrià Garriga Alonso
DESCRIPTION:Adversarial examples are small perturbations to an input point
  that cause a Neural Network (NN) to misclassify it.\n\nSome recent resear
 ch shows the existence of "universal adversarial perturbations" which\, un
 like previous adversarial examples\, are not specific to data points and n
 etwork architectures. We will also talk about some results which try to li
 nk this behaviour to the geometry of decision boundaries learned by neural
  networks.\n\nAdversarial inputs by themselves aren't the main concern for
  the value alignment problem. However\, the insight they can give about NN
  internals will be important if future AIs rely on NNs at all.\n\nRelevant
  readings:\nThe Robustness of Deep Networks: A Geometrical Perspective\nht
 tp://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8103145&tag=1\n\nAdvers
 arial Spheres https://arxiv.org/abs/1801.02774
LOCATION:Cambridge University Engineering Department\, CBL Seminar room B
 E4-38.  For directions see http://learning.eng.cam.ac.uk/Public/Directions
END:VEVENT
END:VCALENDAR
