BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Continual Learning: Definitions\, Benchmarks\, and Approaches - Si
 ddharth Swaroop (University of Cambridge)
DTSTART:20190306T134500Z
DTEND:20190306T151500Z
UID:TALK121132@talks.cam.ac.uk
CONTACT:75379
DESCRIPTION:Continual learning is a field that brings together many areas 
 of research\, including online learning\, transfer learning\, multi-task l
 earning\, meta-learning\, and few-shot learning. In broad terms\, in continu
 al learning\, we see data sequentially (we are not allowed to revisit old 
 data)\, and we desire good performance across all tasks observed so far. T
 his general learning setting is one step towards bridging the gap between 
 machine and human learning. One particular challenge for modern continual 
 learning is the tendency of neural networks to catastrophically forget old
  data when new training data is introduced. Despite considerable recent
  interest in continual learning\, a precise definition of the field rem
 ains elusive\; this is reflected in the way recent papers each tend to
  propose their own desiderata and benchmarks to test how well their app
 roach performs.\n\nIn this talk\, we will bring together ideas from man
 y papers to motivate and compile a list of desiderata that summarises c
 ontinual learn
 ing. We will then critically examine current benchmarks used in continual 
 learning: many of them hide potential flaws by only testing a few of conti
 nual learning's desiderata. Finally\, we will lay out the space of current
  continual learning approaches\, and look at a few of the state-of-the-art
  approaches [1\, 2\, 3\, 4].\n\n[1] C. V. Nguyen\, Y. Li\, T. D. Bui\, and
  R. E. Turner. "Variational continual learning"\, ICLR 2018.\n[2] J. Kirk
 patrick et al. "Overcoming catastrophic forgetting in neural networks"\, P
 roceedings of the National Academy of Sciences\, 114(13):3521–3526\, 201
 7.\n[3] A. A. Rusu et al. "Progressive neural networks"\, arXiv preprint a
 rXiv:1606.04671\, 2016.\n[4] J. Schwarz et al. "Progress & compress: A sca
 lable framework for continual learning"\, ICML 2018.
LOCATION:Engineering Department\, CBL Room BE-438
END:VEVENT
END:VCALENDAR
