

## Scaling and Generalizing Approximate Bayesian Inference

- Prof. David Blei (Columbia University)
- Tuesday 12 July 2016, 11:00-12:00
- James Dyson Building Meeting Room on the Ground Floor
If you have a question about this talk, please contact Louise Segar.

Latent variable models have become a key tool for the modern statistician, letting us express complex assumptions about the hidden structures that underlie our data, and they have been successfully applied in numerous fields. The central computational problem in latent variable modeling is posterior inference: approximating the conditional distribution of the latent variables given the observations. Posterior inference is central to both exploratory and predictive tasks.

Approximate posterior inference algorithms have revolutionized Bayesian statistics, revealing its potential as a usable and general-purpose language for data analysis. Bayesian statistics, however, has not yet reached this potential. First, statisticians and scientists regularly encounter massive data sets, but existing approximate inference algorithms do not scale well. Second, most approximate inference algorithms are not generic; each must be adapted to the specific model at hand.

In this talk I will discuss our recent research on addressing these two limitations. I will describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models of text conditioned on millions of articles. Then I will discuss black box variational inference, a generic algorithm for approximating the posterior that can easily be applied to many models, with little model-specific derivation and few restrictions on their properties. I will demonstrate its use on longitudinal models of healthcare data and deep exponential families, and discuss a new black box variational inference algorithm in the Stan programming language.

This is joint work based on these three papers:

- M. Hoffman, D. Blei, J. Paisley, and C. Wang. Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347, 2013.
- R. Ranganath, S. Gerrish, and D. Blei. Black box variational inference. Artificial Intelligence and Statistics, 2014.
- A. Kucukelbir, R. Ranganath, A. Gelman, and D. Blei. Automatic variational inference in Stan. Neural Information Processing Systems, 2015.
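As a rough illustration of the two ideas described in the abstract (this is not code from the talk or the papers), the sketch below combines them on a toy conjugate model: the ELBO gradient is estimated with a black box, score-function estimator that only evaluates the log joint, and the likelihood is computed on a random minibatch rescaled by N/M, the data-subsampling idea behind stochastic variational inference. The model, step size, sample counts, and the simple mean baseline are all illustrative choices; the papers above use natural gradients, Rao-Blackwellization, and control variates instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (illustrative, not from the talk):
#   theta ~ N(0, 1),  y_i | theta ~ N(theta, 1),  i = 1..N
N = 1000
y = rng.normal(2.0, 1.0, size=N)

# Variational family q(theta) = N(mu, sigma^2); optimize (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
S, M, step = 64, 100, 5e-4   # MC samples, minibatch size, step size

def log_joint(theta, y_batch):
    """log p(theta) + (N/M) * sum_i log p(y_i | theta), up to constants.
    The N/M rescaling is the data-subsampling idea of stochastic VI."""
    log_prior = -0.5 * theta ** 2
    log_lik = -0.5 * ((y_batch[None, :] - theta[:, None]) ** 2).sum(axis=1)
    return log_prior + (N / len(y_batch)) * log_lik

for _ in range(4000):
    sigma = np.exp(log_sigma)
    theta = rng.normal(mu, sigma, size=S)             # theta_s ~ q(theta)
    batch = y[rng.choice(N, size=M, replace=False)]   # random data subsample

    log_q = -log_sigma - 0.5 * np.log(2 * np.pi) - (theta - mu) ** 2 / (2 * sigma ** 2)
    f = log_joint(theta, batch) - log_q               # "black box" evaluations only
    f = f - f.mean()                                  # simple baseline to cut variance

    # Score-function gradient: grad ELBO = E_q[ grad log q(theta) * f ]
    d_mu = (theta - mu) / sigma ** 2
    d_ls = (theta - mu) ** 2 / sigma ** 2 - 1.0
    mu += step * np.mean(d_mu * f)
    log_sigma += step * np.mean(d_ls * f)

# Exact posterior here is N(sum(y) / (N + 1), 1 / (N + 1)); q should end up close.
post_mean = y.sum() / (N + 1)
print(mu, np.exp(log_sigma), post_mean)
```

Because the estimator only requires evaluating the log joint at sampled latent variables, no model-specific gradient derivations are needed, which is what makes the approach "black box"; the variance-reduction machinery in the papers is what makes it practical beyond toy models like this one.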
This talk is part of the Machine Learning @ CUED series.

## This talk is included in these lists:

- Seminar
- All Talks (aka the CURE list)
- Biology
- CBL important
- Cambridge Big Data
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge Neuroscience Seminars
- Cambridge University Engineering Department Talks
- Centre for Smart Infrastructure & Construction
- Chris Davis' list
- Creating transparent intact animal organs for high-resolution 3D deep-tissue imaging
- Featured lists
- Guy Emerson's list
- Inference Group Summary
- Information Engineering Division seminar list
- James Dyson Building Meeting Room on the Ground Floor
- Joint Machine Learning Seminars
- Life Science
- Life Sciences
- Machine Learning @ CUED
- Machine Learning Summary
- Neuroscience
- Neuroscience Seminars
- Required lists for MLG
- School of Technology
- Simon Baker's List
- Stem Cells & Regenerative Medicine
- Trust & Technology Initiative - interesting events
- bld31
- dh539
- ndk22's list
- rp587
Note that ex-directory lists are not shown.