BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:AI+Pizza June 2018 - Microsoft Research Cambridge/University of Cam
 bridge
DTSTART:20180622T163000Z
DTEND:20180622T180000Z
UID:TALK107617@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:*Speaker 1*: Andrey Malinin\n\n*Title*: Estimating Predictive
  Uncertainty in Deep Learning\n\n*Abstract*: Estimating how uncertain an A
 I system is in its predictions is important to improve the safety of such 
 systems. Uncertainty in predictions can result from uncertainty in model pa
 rameters\, irreducible data uncertainty and uncertainty due to distributio
 nal mismatch between the test and training data distributions. Different a
 ctions might be taken depending on the source of the uncertainty so it is 
 important to be able to distinguish between them. Recently\, baseline task
 s and metrics have been defined and several practical methods to estimate 
 uncertainty developed. These methods\, however\, attempt to model uncertai
 nty due to distributional mismatch either implicitly through model uncerta
 inty or as data uncertainty. This work proposes a new framework for modeli
 ng predictive uncertainty called Prior Networks (PNs) which explicitly mod
 els distributional uncertainty. PNs do this by parameterizing a prior dist
 ribution over predictive distributions. This work focuses on uncertainty f
 or classification and evaluates PNs on the tasks of identifying out-of-dis
 tribution (OOD) samples and detecting misclassification on the MNIST datas
 et\, where they are found to outperform previous methods.\n\n*Speaker 2*: 
 Chris Cremer\n\n*Title*: Inference Suboptimality in Variational Autoencoder
 s\n\n*Abstract*: Amortized inference allows latent-variable models trained
  via variational learning to scale to large datasets. The quality of appro
 ximate inference is determined by two factors: a) the capacity of the vari
 ational distribution to match the true posterior and b) the ability of the
  recognition network to produce good variational parameters for each datap
 oint. We examine approximate inference in variational autoencoders in term
 s of these factors. We find that divergence from the true posterior is oft
 en due to imperfect recognition networks\, rather than the limited complex
 ity of the approximating distribution. We show that this is due partly to 
 the generator learning to accommodate the choice of approximation. Further
 more\, we show that the parameters used to increase the expressiveness of 
 the approximation play a role in generalizing inference rather than simply
  improving the complexity of the approximation.\n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
