Easter Talklets: Agnieszka and Lorena
- 👤 Speakers: Agnieszka Słowik (Department of Computer Science and Technology, University of Cambridge); Lorena Qendro (Department of Computer Science and Technology, University of Cambridge)
- 📅 Date & Time: Thursday 03 June 2021, 13:00 - 14:00
- 📍 Venue: Remote
Abstract
Speaker 1: Agnieszka Słowik
Title: Learning from multiple distributions
Abstract: Machine learning has proven extremely useful in many applications in recent years. However, many of these success stories stem from evaluating algorithms on data very similar to the data they were trained on. When applied to a new data distribution (for instance, when the demographic group of users changes), machine learning algorithms often fail. In this talk, I focus on an approach to achieving generalisation based on learning from multiple data distributions. The presented research contribution is twofold: 1) I present a new dataset for evaluating out-of-distribution generalisation, and 2) I present a new theoretical result on the capabilities of Distributionally Robust Optimisation and show how it leads to practical recommendations. The talk is based on my two recent papers: Linear unit-tests for invariance discovery and Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation.
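For readers unfamiliar with Distributionally Robust Optimisation, the core idea can be sketched in a few lines: instead of minimising the average loss over pooled data, minimise the worst-case loss across several data distributions (groups). The sketch below is illustrative only, using a toy linear model and synthetic groups; all names and hyperparameters are assumptions, not taken from the talk or papers.

```python
import numpy as np

def group_losses(w, groups):
    """Mean squared error of linear model w on each group's (X, y)."""
    return np.array([np.mean((X @ w - y) ** 2) for X, y in groups])

def dro_step(w, groups, lr=0.01):
    """One subgradient step on the worst-case (max) group loss."""
    losses = group_losses(w, groups)
    X, y = groups[int(np.argmax(losses))]    # pick the worst-off group
    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of its MSE
    return w - lr * grad

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
# Two "distributions": same labelling rule, different input scales
groups = []
for scale in (1.0, 3.0):
    X = rng.normal(size=(200, 2)) * scale
    groups.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(200):
    w = dro_step(w, groups)
```

Because each step targets whichever group currently suffers the largest loss, no single distribution is sacrificed to improve average performance, which is the intuition behind DRO's appeal for out-of-distribution generalisation.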
Speaker 2: Lorena Qendro
Title: A Probabilistic Approach Towards Training-Free Adversarial Defense in Quantized CNNs
Abstract: Quantized neural networks (NNs) are the common standard for efficiently deploying deep learning models on tiny hardware platforms. However, we observe that quantized NNs are as vulnerable to adversarial attacks as their full-precision counterparts. With the proliferation of neural networks on the small devices that we carry or that surround us, there is a need for efficient models that do not sacrifice trust in their predictions in the presence of malign perturbations. Current mitigation approaches often require adversarial training or are bypassed when the strength of adversarial examples is increased. In this talk, I will present a probabilistic framework that helps overcome these limitations for quantized deep learning models. We will see that it is possible to achieve efficiency and robustness jointly by enabling each module of the framework, without the burden of retraining or ad hoc fine-tuning.
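To make the setting concrete, here is a minimal sketch of the two ingredients the abstract mentions: uniform weight quantization, and a probabilistic (ensemble-style) view in which disagreement across quantized variants serves as a crude uncertainty signal. This is not the speaker's framework; every function and parameter here is an illustrative assumption.

```python
import numpy as np

def quantize(x, bits=8):
    """Uniform symmetric quantization of an array to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))   # a toy layer's weights
x = rng.normal(size=4)        # one input vector

# A tiny "ensemble" over bit widths: the spread of the predictions
# hints at how sensitive the output is to quantization noise.
preds = np.stack([quantize(w, bits=b) @ x for b in (4, 6, 8)])
mean, std = preds.mean(axis=0), preds.std(axis=0)
```

No retraining is involved in this sketch: each ensemble member is derived from the same weights by post-hoc quantization, which loosely mirrors the training-free spirit described in the abstract.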
Series: This talk is part of the Women@CL Events series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge talks
- Department of Computer Science and Technology talks and seminars
- Guy Emerson's list
- Interested Talks
- Remote
- School of Technology
- Trust & Technology Initiative - interesting events
- women@CL all
- Women@CL Events
- yk449