BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Adversarial Machine Learning - Tudor Paraschivescu
DTSTART:20171018T180000Z
DTEND:20171018T183000Z
UID:TALK93934@talks.cam.ac.uk
CONTACT:Matthew Ireland
DESCRIPTION:An adversarial example is an instance of input data which has 
 been modified in such a way that a human observer would not see the differ
 ence\, but a Machine Learning model would be tricked into misclassifying i
 t. In this talk\, we are going to see how such examples could affect some 
 Neural Network models by compromising their integrity. For example\, a per
 son could 'paint' a STOP sign in such a way that a self-driving car would 
 interpret it as something completely different. We are going to explore ho
 w easy it is to generate images which will be misclassified by state-of-th
 e-art architectures. Afterwards\, we will look into currently available de
 fences and how one can employ a transferability attack to bypass them. Th
 e talk will conclude by comparing testing with verification\, seeing how 
 each is used to assess the security of a Neural Network architecture.
LOCATION:Wolfson Hall\, Churchill College
END:VEVENT
END:VCALENDAR
