Machine Learning in the Context of Computer Security

If you have a question about this talk, please contact Kieron Ivy Turk.

Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can cause ML systems to break at the training, inference, and deployment stages. In this talk, I will cover my recent work on attacking and defending machine learning pipelines; I will describe how otherwise-correct ML components end up vulnerable because an attacker can break their underlying assumptions. First, using attacks against text preprocessing as an example, I will discuss why a holistic view of the ML deployment is a key requirement for ML security. Second, I will describe how an adversary can exploit the computer systems underlying the ML pipeline to mount availability attacks at both the training and inference stages. At the training stage, I will present data ordering attacks that break stochastic optimisation routines. At the inference stage, I will describe sponge examples: inputs that soak up a large amount of energy and take a long time to process. Finally, building on my experience attacking ML systems, I will discuss how to develop robust defences against ML attacks that take an end-to-end view of the ML pipeline.
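
To give a flavour of the preprocessing attacks, the sketch below shows how a single invisible Unicode character can defeat a naive keyword filter: the rendered text is unchanged, but the tokens the pipeline sees are not. The tokeniser and filter here are toy stand-ins assumed for illustration, not the systems discussed in the talk.

    # Hypothetical invisible-character attack on text preprocessing.
    ZWSP = "\u200b"  # zero-width space: renders as nothing

    def tokenise(text):
        # Naive whitespace tokeniser, as many pipelines assume.
        return text.lower().split()

    def is_flagged(text, banned=frozenset({"attack"})):
        # Toy content filter: flags text containing a banned token.
        return any(tok in banned for tok in tokenise(text))

    clean = "launch the attack now"
    perturbed = clean.replace("attack", "att" + ZWSP + "ack")

    print(is_flagged(clean))      # True  -- the keyword is caught
    print(is_flagged(perturbed))  # False -- displays identically,
                                  #          tokenises differently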
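The data ordering attacks need no modification of the training data at all: the adversary controls only the order in which unmodified examples reach stochastic gradient descent. Below is a minimal sketch of the idea on a one-parameter regression, where the model, data, and ordering policy are all illustrative assumptions:

    # Toy data ordering attack: same data, same model, same SGD --
    # only the presentation order differs. All values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=256)
    y = 3.0 * X + rng.normal(scale=0.3, size=256)  # true slope is 3.0

    def sgd(order_fn, epochs=5, lr=0.05):
        w = 0.0
        for _ in range(epochs):
            for i in order_fn(w):
                w += 2 * lr * X[i] * (y[i] - w * X[i])  # per-example step
        return w

    def honest_order(w):
        # Standard practice: unbiased random reshuffle every epoch.
        return rng.permutation(len(X))

    def adversarial_order(w):
        # Reorder only: examples whose gradient pushes w upward go last,
        # so the tail of each epoch systematically biases the result.
        return np.argsort(X * (y - w * X))

    print("shuffled :", sgd(honest_order))       # close to 3.0
    print("reordered:", sgd(adversarial_order))  # biased away from 3.0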
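Sponge examples exploit the gap between input size and computation cost: two inputs of identical length can induce very different amounts of work. The sketch below uses an invented subword vocabulary and a stand-in quadratic-cost 'model'; the actual sponge examples target real NLP and vision pipelines and their energy consumption.

    # Toy illustration of a sponge input: equal-length inputs, very
    # different downstream cost. Vocabulary and "model" are invented.
    import time

    VOCAB = {"hello", "world", "he", "llo", "wor", "ld"}  # toy subwords
    MAX_LEN = max(map(len, VOCAB))

    def subword_tokenise(text):
        # Greedy longest-match subword tokenisation; characters with no
        # match fall back to one token each (like byte-level fallbacks).
        tokens, i = [], 0
        while i < len(text):
            for j in range(min(i + MAX_LEN, len(text)), i, -1):
                if text[i:j] in VOCAB:
                    tokens.append(text[i:j])
                    i = j
                    break
            else:
                tokens.append(text[i])
                i += 1
        return tokens

    def fake_inference(tokens):
        # Stand-in for a model whose cost grows quadratically with the
        # token count, as transformer self-attention does.
        t0 = time.perf_counter()
        _ = [[a == b for a in tokens] for b in tokens]
        return time.perf_counter() - t0

    benign = "helloworld" * 50  # tokenises into few, long subwords
    sponge = "zqxjkvbwpf" * 50  # same length, one token per character

    for text in (benign, sponge):
        toks = subword_tokenise(text)
        print(f"{len(text)} chars -> {len(toks)} tokens, "
              f"{fake_inference(toks):.4f}s")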

This talk is part of the Computer Laboratory Security Seminar series.
