Bayesian Computation Without Tears: Probabilistic Programming and Universal Stochastic Inference

If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

Latent variable modeling and Bayesian inference are appealing in theory: they provide a unified mathematical framework for solving a wide range of machine learning problems. In practice, however, they are often difficult to apply effectively. Accurate inference in even simple models can seem computationally intractable, while more realistic models are difficult even to write down precisely.

In this talk, I will introduce new probabilistic programming technology that alleviates many of these difficulties. Where graphical models marry statistics with graph theory, probabilistic programming marries Bayesian inference with universal computation. This makes it easier to build useful, fast machine learning software that goes significantly beyond graphical models in flexibility and power. I will illustrate the approach with page-long probabilistic programs that break simple CAPTCHAs by running randomized CAPTCHA generators backwards, and that interpret noisy time-series data from clinical medicine.
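The idea of "running a generator backwards" can be sketched with rejection sampling: write a forward program that samples a latent value and a noisy rendering of it, then infer the latent by repeatedly running the program and keeping only runs whose rendering matches the observation. The toy model below (a 3-digit code with per-digit noise, hypothetical parameters) is only an illustration of the principle, not the talk's actual CAPTCHA programs.

```python
import random

def generate():
    """Forward model: sample a latent 3-digit code, then a noisy rendering
    in which each digit is corrupted (replaced uniformly) 10% of the time."""
    code = [random.randint(0, 9) for _ in range(3)]
    rendering = [d if random.random() < 0.9 else random.randint(0, 9)
                 for d in code]
    return code, rendering

def infer(observed, n_samples=20000):
    """Run the generator 'backwards' by rejection: keep latent codes whose
    rendering exactly matches the observation, and return the most frequent
    survivor (an approximate posterior mode)."""
    counts = {}
    for _ in range(n_samples):
        code, rendering = generate()
        if rendering == observed:
            key = tuple(code)
            counts[key] = counts.get(key, 0) + 1
    return max(counts, key=counts.get) if counts else None

random.seed(0)
true_code, obs = generate()
print(true_code, infer(obs))
```

Rejection sampling is the simplest universal inference strategy and scales poorly; practical probabilistic programming systems replace the accept/reject loop with smarter strategies (e.g. Markov chain Monte Carlo) while keeping the same forward-model program.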

I will also present CrossCat, a black-box, parameter-free, fully Bayesian machine learning method built on an optimized inference engine for a single probabilistic program that learns simple but flexible probabilistic programs from data. CrossCat estimates the full joint distribution underlying high-dimensional datasets, including the noisy, incomplete tables produced by modern database systems. It can also efficiently simulate from any of its finite-dimensional conditional distributions, and it accurately solves problems of prediction, imputation, feature selection, and classification.
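The payoff of estimating a full joint distribution is that any conditional query (prediction, imputation, and so on) reduces to conditioning and renormalizing. The sketch below illustrates that idea on a toy two-column table using empirical frequencies; it is not CrossCat itself, whose joint estimate is far more structured, and the column names and data are invented for illustration.

```python
from collections import Counter

# Toy table: rows are (weather, activity) pairs; imagine some other rows
# have a missing activity that we would like to impute.
rows = [("sun", "walk"), ("sun", "walk"), ("sun", "read"),
        ("rain", "read"), ("rain", "read"), ("rain", "walk")]

# "Learn" the full joint as empirical frequencies over complete rows.
joint = Counter(rows)

def conditional(weather):
    """P(activity | weather), obtained from the joint by restricting
    to matching rows and renormalizing."""
    sub = {a: c for (w, a), c in joint.items() if w == weather}
    z = sum(sub.values())
    return {a: c / z for a, c in sub.items()}

def impute(weather):
    """Fill a missing activity with the conditional mode."""
    dist = conditional(weather)
    return max(dist, key=dist.get)

print(conditional("sun"))
print(impute("rain"))  # -> 'read' (2 of the 3 rain rows are 'read')
```

Because every query is answered from the one joint estimate, the same learned model serves prediction, imputation, and feature-relevance questions without per-task retraining.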

Throughout, I will highlight how probabilistic programming points toward a new model of computation, based on universal inference over distributions rather than universal calculation of functions, and how it exposes the mathematical and algorithmic structure needed to engineer efficient, distributed machine learning systems. I will briefly discuss natively probabilistic hardware that carries these principles down to the physical level. I will also touch on the research directions this model opens up in computational complexity (including steps toward an explanation of the unreasonable effectiveness of simple, randomized algorithms on apparently intractable problems), in programming languages, and in artificial intelligence.

This talk is part of the Microsoft Research Cambridge, public talks series.

