Learning rates in Bayesian nonparametrics

If you have a question about this talk, please contact Shakir Mohamed.

In semiparametric and nonparametric statistics, the unknown parameter is a function (e.g. a regression function or a density).

A Bayesian method starts, as usual, with the specification of a prior distribution on the parameter, which is equivalent to modelling this function as a sample path of a stochastic process. Next, Bayes’ rule does the work and yields the posterior distribution, a probability distribution on a function space.
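As a concrete illustration, below is a minimal NumPy sketch of this pipeline for Gaussian process regression. The squared-exponential kernel, length-scale, noise level, and toy data are illustrative assumptions, not taken from the talk; the point is only that the prior is a distribution over sample paths and that the posterior follows by Bayes’ rule, here in closed form by Gaussian conditioning.

```python
# Minimal sketch: a Gaussian process prior over functions and its posterior
# given noisy observations. All modelling choices here are illustrative.
import numpy as np

def se_kernel(x, y, length_scale=0.2):
    """Squared-exponential covariance matrix k(x_i, y_j)."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / length_scale ** 2)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)   # points at which we look at the function

# Prior: the unknown function is modelled as a sample path of a GP.
K = se_kernel(grid, grid)
prior_paths = rng.multivariate_normal(
    np.zeros(grid.size), K + 1e-10 * np.eye(grid.size), size=5)

# Toy data: noisy observations of a fixed "true" regression function.
f0 = lambda x: np.sin(2 * np.pi * x)
sigma = 0.1
x_obs = rng.uniform(0.0, 1.0, size=20)
y_obs = f0(x_obs) + sigma * rng.normal(size=x_obs.size)

# Posterior: Bayes' rule, available in closed form by Gaussian conditioning.
K_oo = se_kernel(x_obs, x_obs) + sigma ** 2 * np.eye(x_obs.size)
K_go = se_kernel(grid, x_obs)
post_mean = K_go @ np.linalg.solve(K_oo, y_obs)
post_cov = K - K_go @ np.linalg.solve(K_oo, K_go.T)
```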

After giving examples of priors, and discussing how prior and posterior can be visualised, we focus on studying the posterior distribution under the (non-Bayesian) assumption that the data are generated according to some fixed true distribution. We are interested in whether, and if so how fast, the sequence of posterior distributions contracts to the true parameter as the amount of data increases. We review general results and examples, including Gaussian process priors. The general message is that, unlike in parametric statistics, the prior often does not wash out and can have a large influence on the posterior.
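The following sketch, in the same illustrative Gaussian process setting as above (the true function, kernel, and sample sizes are assumptions), gives an empirical feel for contraction: as n grows, the posterior mean moves towards the fixed true function. This is an informal check on one summary of the posterior, not the formal contraction-rate statements reviewed in the talk, which concern the full posterior distribution; the speed of the decay depends on the prior, which is exactly the point.

```python
# Informal illustration of posterior contraction: track the L2 distance
# between the GP posterior mean and the fixed true function f0 as n grows.
import numpy as np

def se_kernel(x, y, length_scale=0.2):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / length_scale ** 2)

rng = np.random.default_rng(1)
f0 = lambda x: np.sin(2 * np.pi * x)   # the fixed "true" regression function
sigma = 0.1
grid = np.linspace(0.0, 1.0, 200)

for n in [10, 40, 160, 640]:
    x_n = rng.uniform(0.0, 1.0, size=n)
    y_n = f0(x_n) + sigma * rng.normal(size=n)
    K_nn = se_kernel(x_n, x_n) + sigma ** 2 * np.eye(n)
    post_mean = se_kernel(grid, x_n) @ np.linalg.solve(K_nn, y_n)
    err = np.sqrt(np.mean((post_mean - f0(grid)) ** 2))
    print(f"n = {n:4d}   empirical L2 distance to f0: {err:.3f}")
```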

This dependence may be alleviated by another round of prior modelling, focused on a "bandwidth" parameter. The resulting hierarchical Bayesian procedures can be viewed as providing an elegant and principled framework for regularization and adaptation.
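A minimal sketch of this hierarchical idea, again in the illustrative Gaussian process setting: place a prior on the bandwidth (here the GP length-scale), weight each candidate by its marginal likelihood, and average the resulting conditional posteriors. The discrete grid of candidate length-scales and the uniform hyperprior are assumptions made for the sketch, a crude stand-in for a continuous hyperprior.

```python
# Hierarchical sketch: a (discretised) prior on the GP length-scale, a
# posterior over length-scales via marginal likelihoods, and the mixture
# posterior mean for the regression function. All choices are illustrative.
import numpy as np

def se_kernel(x, y, ell):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

def log_marginal(y, K):
    """Log marginal likelihood log p(y) for y ~ N(0, K), via Cholesky."""
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * y.size * np.log(2.0 * np.pi))

rng = np.random.default_rng(2)
f0 = lambda x: np.sin(2 * np.pi * x)
sigma = 0.1
x = rng.uniform(0.0, 1.0, size=50)
y = f0(x) + sigma * rng.normal(size=x.size)
grid = np.linspace(0.0, 1.0, 100)

# Uniform hyperprior over a grid of candidate bandwidths; with a uniform
# prior the posterior weights are the normalised marginal likelihoods.
lengthscales = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
log_weights = np.array([
    log_marginal(y, se_kernel(x, x, ell) + sigma ** 2 * np.eye(x.size))
    for ell in lengthscales
])
weights = np.exp(log_weights - log_weights.max())
weights /= weights.sum()   # posterior over the bandwidth given the data

# Hierarchical posterior mean: mixture of the per-bandwidth posterior means.
post_mean = np.zeros(grid.size)
for w, ell in zip(weights, lengthscales):
    K = se_kernel(x, x, ell) + sigma ** 2 * np.eye(x.size)
    post_mean += w * (se_kernel(grid, x, ell) @ np.linalg.solve(K, y))
```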

This talk is part of the Machine Learning Reading Group @ CUED series.
