BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Pizza & AI June 2019 - Microsoft Research/University of Cambridge
DTSTART:20190607T163000Z
DTEND:20190607T180000Z
UID:TALK125887@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:*Speaker 1* - Andrew Fitzgibbon\n\n*Title* - Big data\, small 
 data\, oddly-shaped data: Welcome to “All Data” AI\n\n*Abstract* - I
 ’m happy with the term “AI” — it just means doing cool stuff with 
 data.  We’ve seen great successes with computer vision\, natural languag
 e processing\, and a host of other applications.  However\, I’m not so h
 appy when we shoehorn every problem into a BxWxHxC block of numbers to fit
  the constraints of GPU hardware.  As the new head of the All Data AI (ADA
 ) group at Microsoft Cambridge\, I’m excited by a future where we can ap
 ply AI in traditional “big data” scenarios\, in “small data” scena
 rios where we need to learn fast from limited examples\, in the crossover 
 area where we may have millions of related subproblems\, each data-poor\, 
 but jointly data-rich.  I’m excited to apply AI to structured data like 
 graphs\, molecules\, and program code.  And I’ll talk about the compounding
  of excitement that results from applying these techniques to shipping pro
 ducts that impact millions of real users.\n\n\n\n*Speaker 2* - John Bronsk
 ill\n\n*Title* - Fast and Flexible Multi-Task Classification Using Conditi
 onal Neural Adaptive Processes\n\n*Abstract* - This talk will describe our
  recent work on designing image classification systems that\, after an ini
 tial multi-task training phase\, can automatically adapt to new tasks enco
 untered at test time. I will introduce an approach that relates to existin
 g approaches to meta-learning and so-called conditional neural processes\,
  generalising them to the multi-task classification setting. The resulting
  approach\, called Conditional Neural Adaptive Processes (CNAPS)\, compris
 es a classifier whose parameters are modulated by an adaptation network th
 at takes the current task's dataset as input. I will show that CNAPS achie
 ves state-of-the-art results on the challenging Meta-Dataset few-shot lear
 ning benchmark\, indicating high-quality transfer learning that is robust\,
  avoiding both over-fitting in low-shot regimes and under-fitting in high-
 shot regimes. Timing experiments reveal that CNAPS is computationally effi
 cient at test time as it does not involve gradient-based adaptation. Final
 ly\, I will show that trained models are immediately deployable to continu
 al learning and active learning\, where they can outperform existing approac
 hes that do not leverage transfer learning.\n\n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
