
Expectation Propagation, Experimental Design for the Sparse Linear Model


If you have a question about this talk, please contact Zoubin Ghahramani.

Expectation propagation (EP) is a novel variational method for approximate Bayesian inference which has given promising results, in terms of both computational efficiency and accuracy, in several machine learning applications. It can readily be applied to inference in linear models with non-Gaussian priors, generalised linear models, and nonparametric Gaussian process models, among others. I will give an introduction to this framework. Important aspects of EP are not yet well understood theoretically, and I will highlight some open problems.
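To make the core of EP concrete, below is a minimal sketch of a single one-dimensional site update, assuming a Laplace sparsity site and a Gaussian approximation; the function names and the numerical quadrature shortcut are illustrative assumptions on my part, not code from the talk.

    import numpy as np

    def tilted_moments(cavity_mu, cavity_var, site_fn, half_width=10.0, n=2001):
        # Numerically integrate cavity(w) * site(w) on a grid to obtain the
        # normaliser, mean and variance of the "tilted" distribution.
        s = np.sqrt(cavity_var)
        w = np.linspace(cavity_mu - half_width * s, cavity_mu + half_width * s, n)
        dw = w[1] - w[0]
        p = np.exp(-0.5 * (w - cavity_mu) ** 2 / cavity_var) * site_fn(w)
        Z = p.sum() * dw
        mean = (w * p).sum() * dw / Z
        var = ((w - mean) ** 2 * p).sum() * dw / Z
        return mean, var

    def ep_site_update(post_mu, post_var, site_mu, site_var, site_fn):
        # 1. Cavity: divide the Gaussian posterior by the current Gaussian site.
        cav_prec = 1.0 / post_var - 1.0 / site_var
        cav_var = 1.0 / cav_prec
        cav_mu = cav_var * (post_mu / post_var - site_mu / site_var)
        # 2. Moment-match the tilted distribution cavity * exact site.
        new_mu, new_var = tilted_moments(cav_mu, cav_var, site_fn)
        # 3. New Gaussian site = matched Gaussian / cavity (natural parameters).
        new_site_var = 1.0 / (1.0 / new_var - 1.0 / cav_var)
        new_site_mu = new_site_var * (new_mu / new_var - cav_mu / cav_var)
        return new_mu, new_var, new_site_mu, new_site_var

    # Example: Laplace (double-exponential) sparsity site with unit scale.
    laplace_site = lambda w: np.exp(-np.abs(w))
    print(ep_site_update(post_mu=0.5, post_var=1.0,
                         site_mu=0.0, site_var=5.0, site_fn=laplace_site))

In practice EP cycles such updates over all non-Gaussian sites until the site parameters stop changing.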

I will then show how to address sequential experimental design for a linear model with non-Gaussian sparsity priors, presenting results from two different machine learning applications. These results indicate that experimental design for such models may have significantly different properties from the linear-Gaussian case, where Bayesian inference is analytically tractable and experimental design seems best understood.
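As a sketch of the sequential design setting, the snippet below assumes the standard greedy information-gain criterion under a Gaussian (e.g. EP) approximation to the weight posterior: with posterior covariance C and noise variance sigma^2, a candidate measurement x scores 0.5 * log(1 + x^T C x / sigma^2). The candidate pool and function name are hypothetical, not the specific procedure or data from the talk.

    import numpy as np

    def next_design_point(candidates, posterior_cov, noise_var):
        # candidates: (m, d) array of possible measurement vectors.
        # Score each candidate x by 0.5 * log(1 + x^T C x / noise_var).
        quad = np.einsum('ij,jk,ik->i', candidates, posterior_cov, candidates)
        gains = 0.5 * np.log1p(quad / noise_var)
        return int(np.argmax(gains))

    rng = np.random.default_rng(0)
    X_pool = rng.standard_normal((100, 20))   # candidate measurements
    C = np.eye(20)                            # current approximate posterior covariance
    print(next_design_point(X_pool, C, noise_var=0.1))

After the chosen measurement is observed, the (EP) posterior and hence C would be refitted before scoring the remaining candidates.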

This talk is part of the Machine Learning @ CUED series.
