Pizza & AI April 2019
- Speakers: Microsoft Research / University of Cambridge
- Date & Time: Friday 26 April 2019, 17:30 - 19:00
- Venue: Auditorium, Microsoft Research Ltd, 21 Station Road, Cambridge, CB1 2FB
Abstract
Speaker 1 – Andrey Malinin
Title – This is the EnDD: Ensemble Distribution Distillation
Abstract – Ensembles of Neural Network (NN) models are known to yield improvements in accuracy as well as robust measures of uncertainty. However, ensembles come at a high computational and memory cost, which may be prohibitive for certain applications. Previously, the distillation of an ensemble into a single model has been investigated. Such approaches decrease computational cost and allow a single model to achieve accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of epistemic uncertainty, is lost. Recently, a new type of model, called a Prior Network, has been introduced, which allows a single DNN to explicitly model a distribution over output distributions conditioned on the input by parameterizing a Dirichlet distribution. This work proposes an approach called Ensemble Distribution Distillation, which allows distilling an ensemble into a single Prior Network model, retaining both the improved classification performance and the measures of diversity of the ensemble. The properties of Ensemble Distribution Distillation are investigated on a synthetic spiral dataset.
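The two ingredients the abstract describes can be sketched concretely: a distillation loss that fits a Dirichlet to the ensemble members' softmax outputs, and the uncertainty decomposition (total = data + knowledge uncertainty) whose diversity term the distilled Prior Network retains. The following NumPy sketch is illustrative only, with function names of my own, not the paper's implementation:

```python
import numpy as np
from math import lgamma

def dirichlet_nll(alphas, member_probs):
    """Negative log-likelihood of ensemble softmax outputs under Dir(alphas).

    Minimizing this w.r.t. a network's predicted alphas is the idea behind
    distribution distillation (simplified sketch, not the paper's code).
    """
    m = member_probs.shape[0]
    log_norm = sum(lgamma(a) for a in alphas) - lgamma(alphas.sum())
    return m * log_norm - ((alphas - 1.0) * np.log(member_probs)).sum()

def uncertainty_decomposition(member_probs):
    """Split an ensemble's total uncertainty into data and knowledge parts."""
    mean = member_probs.mean(axis=0)
    total = -(mean * np.log(mean)).sum()                # entropy of the mean prediction
    data = -(member_probs * np.log(member_probs)).sum(axis=1).mean()  # expected entropy
    return total, data, total - data                    # mutual info = ensemble diversity

# An agreeing ensemble has zero knowledge uncertainty; a disagreeing one does not.
agree = np.array([[0.9, 0.1], [0.9, 0.1]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
_, _, mi_agree = uncertainty_decomposition(agree)
_, _, mi_disagree = uncertainty_decomposition(disagree)
```

A sharply concentrated Dirichlet matching the agreeing ensemble attains a lower `dirichlet_nll` than a uniform one, which is what drives the distilled network toward reproducing the ensemble's spread, not just its mean.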
Speaker 2 – Yingzhen Li
Title – Meta-Learning for Stochastic Gradient MCMC
Abstract – Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model; even a simple modification of the underlying dynamical system requires significant physical intuition. This paper presents the first meta-learning algorithm that allows automated design of the underlying continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of neural network energy landscapes. Experiments validate the proposed approach on both Bayesian fully connected neural network and Bayesian recurrent neural network tasks, showing that the learned sampler outperforms generic, hand-designed SG-MCMC algorithms and generalizes to different datasets and larger architectures.
This is joint work with Wenbo Gong and Jose Miguel Hernandez-Lobato from the University of Cambridge. The paper will be presented at ICLR 2019.
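For context, the simplest hand-designed baseline such meta-learned samplers generalize is stochastic gradient Langevin dynamics (SGLD): a gradient step on the log-posterior plus injected Gaussian noise. A minimal sketch on a toy 1-D standard-normal posterior follows (illustrative only; the paper's contribution is replacing these fixed dynamics with learned state-dependent drift and diffusion):

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    # One SGLD update: half a gradient step on log p(theta) plus Gaussian noise
    # with variance equal to the step size.
    noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise

# Toy target: standard normal posterior, log p(theta) = -theta^2 / 2,
# so grad log p(theta) = -theta.
rng = np.random.default_rng(0)
theta = np.array([5.0])  # deliberately bad initialization
samples = []
for t in range(5000):
    theta = sgld_step(theta, lambda th: -th, 0.1, rng)
    if t > 1000:  # discard burn-in
        samples.append(theta[0])
```

After burn-in, the chain's samples should have mean near 0 and standard deviation near 1, matching the target posterior.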
Series: This talk is part of the AI+Pizza series.
