An introduction to adversarial attacks and defences
- 👤 Speaker: Yingzhen Li (University of Cambridge)
- 📅 Date & Time: Wednesday 08 November 2017, 17:00 - 18:30
- 📍 Venue: Cambridge University Engineering Department, CBL Seminar room BE4-38. For directions see http://learning.eng.cam.ac.uk/Public/Directions
Abstract
AI safety is not limited to reinforcement-learning settings. For example, we can use machine learning algorithms to design spam filters, yet attackers can still “reverse-engineer” our defence to send us junk emails. Autonomous driving systems based on computer vision techniques are also vulnerable: an attacker can, for instance, carefully place a sticker on a stop sign to fool the car’s vision system. In this talk I will briefly discuss the mathematical framework of these attack techniques (specifically on image classifiers) and the defence techniques against them.
Slides available here: http://yingzhenli.net/home/pdf/attack_defence.pdf
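As a flavour of the kind of attack the abstract alludes to, one of the simplest gradient-based methods is the fast gradient sign method (FGSM). The sketch below is illustrative only and not taken from the talk: it applies FGSM to a toy logistic-regression classifier, where all weights, inputs, and the budget `eps` are made-up numbers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """One-step FGSM on a logistic-regression classifier.

    x : input vector; y : true label in {-1, +1}; w : weight vector;
    eps : L-infinity perturbation budget.
    """
    # Gradient of the logistic loss -log sigmoid(y * w.x) with respect to x
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    # Move each coordinate by eps in the direction that increases the loss
    return x + eps * np.sign(grad)

# Toy demo (hypothetical numbers): a point correctly classified as +1
w = np.array([1.0, -1.0])
x = np.array([0.3, -0.2])              # w.x = 0.5 > 0, predicted +1
x_adv = fgsm_attack(x, y=+1, w=w, eps=0.6)
# w.x_adv = -0.7 < 0, so the perturbed input is now misclassified as -1
```

The key point, which carries over to deep image classifiers, is that a single gradient sign step bounded by `eps` in each pixel can flip the prediction while leaving the input almost unchanged.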
Series This talk is part of the Engineering Safe AI series.
Included in Lists
- Cambridge talks
- Cambridge University Engineering Department, CBL Seminar room BE4-38. For directions see http://learning.eng.cam.ac.uk/Public/Directions
- Chris Davis' list
- Engineering Safe AI
- Trust & Technology Initiative - interesting events
- yk449