BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Easter Talklets: Agnieszka and Lorena - Agnieszka Słowik (Departm
 ent of Computer Science and Technology\, University of Cambridge)\; Lorena
  Qendro (Department of Computer Science and Technology\, University of Cam
 bridge)
DTSTART:20210603T120000Z
DTEND:20210603T130000Z
UID:TALK160885@talks.cam.ac.uk
CONTACT:Agnieszka Słowik
DESCRIPTION:Speaker 1: Agnieszka Słowik\n\nTitle: Learning from multipl
 e distributions\n\nAbstract: Machine learning has proven extremely use
 ful in many applications in recent years. However\, many of these succ
 ess stories stem from evaluating algorithms on data very similar to th
 e data they were trained on. When applied to a new data distribution (
 for instance\, if the demographic group of users changes)\, machine le
 arning algorithms fail. In this talk\, I focus on an approach to achie
 ving generalisation based on learning from multiple data distributions
 . The presented research contribution is twofold: 1) I present a new d
 ataset for evaluating out-of-distribution generalisation\, and 2) I st
 ate a new theoretical result regarding the capabilities of Distributio
 nally Robust Optimisation and show how this result leads to practical r
 ecommendations. The talk is based on my two recent papers: Linear unit-
 tests for invariance discovery and Algorithmic Bias and Data Bias: Unde
 rstanding the Relation between Distributionally Robust Optimization an
 d Data Curation.\n\nSpeaker 2: Lorena Qendro\n\nTitle: A Probabilistic
  Approach Towards Training-Free Adversarial Defense in Quantized CNNs\
 n\nAbstract: Quantized neural networks (NNs) are the common standard f
 or efficiently deploying deep learning models on tiny hardware platfor
 ms. However\, we notice that quantized NNs are as vulnerable to advers
 arial attacks as full-precision models. With the proliferation of neur
 al networks on small devices that we carry or that surround us\, there
  is a need for efficient models that do not sacrifice trust in the pre
 diction in the presence of malign perturbations. Current mitigation ap
 proaches often require adversarial training or are bypassed when the s
 trength of adversarial examples is increased.\nIn this talk\, I will p
 resent a probabilistic framework that helps overcome these limitations
  for quantized deep learning models. We will see that it is possible t
 o jointly achieve efficiency and robustness by accurately enabling eac
 h module of the framework without the burden of re-training or ad hoc 
 fine-tuning.
LOCATION:Remote
END:VEVENT
END:VCALENDAR
