
## Learning rates in Bayesian nonparametrics: Gaussian process priors

- Aad van der Vaart (Vrije Univ. Amsterdam)
- Friday 27 November 2009, 16:00-17:00
- MR5, CMS, Wilberforce Road, Cambridge, CB3 0WB.
If you have a question about this talk, please contact HoD Secretary, DPMMS.

Joint with the Probability Series.

The sample path of a Gaussian process can be used as a prior model for an unknown function that we wish to estimate. For instance, one might model a regression function or log density a priori as the sample path of a Brownian motion or its primitive, or of some stationary process. Viewing this prior model as a formal prior distribution in a Bayesian set-up, we obtain a posterior distribution in the usual way, which, given the observations, is a probability distribution on a function space. We study this posterior distribution under the assumption that the data are generated according to some given true function, and we are interested in whether the posterior contracts to the true function as the informativeness of the data increases indefinitely, and at what speed. For Gaussian process priors this contraction rate can be described in terms of the small ball probability of the Gaussian process and the position of the true parameter relative to its reproducing kernel Hilbert space. Typically the prior has a strong influence on the contraction rate. This dependence can be alleviated by scaling the sample paths. For instance, an infinitely smooth, stationary Gaussian process scaled by an inverse Gamma variable yields a prior distribution on functions such that the posterior distribution adapts to the unknown smoothness of the true parameter, in the sense that contraction takes place at the minimax rate for the true smoothness.

This talk is part of the Statistics series.

## This talk is included in these lists:

- All CMS events
- All Talks (aka the CURE list)
- CMS Events
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- DPMMS Lists
- DPMMS info aggregator
- DPMMS lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- MR5, CMS, Wilberforce Road, Cambridge, CB3 0WB
- Machine Learning
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
- bld31
- custom
- rp587
Note that ex-directory lists are not shown.
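The set-up described in the abstract — a Gaussian process prior on a regression function, with the posterior concentrating near the true function as the data become more informative — can be sketched numerically. The following is a minimal, hypothetical Python illustration (not from the talk): it uses a squared-exponential covariance and a fixed length scale, whereas the talk concerns the theory of contraction rates and adaptive rescaling; all function names and settings here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not from the talk): GP regression with a
# squared-exponential prior. The sup-norm error of the posterior mean
# typically shrinks as the sample size n grows, illustrating posterior
# contraction toward the true regression function.

def sq_exp_kernel(x, y, length_scale=0.2):
    """Squared-exponential covariance k(x, y) = exp(-(x - y)^2 / (2 l^2))."""
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / length_scale) ** 2)

def posterior_mean(x_train, y_train, x_test, noise_var=0.01):
    """Posterior mean of the GP given noisy observations y = f(x) + eps."""
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_star = sq_exp_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
f_true = lambda x: np.sin(2 * np.pi * x)   # the "true" function generating the data
x_test = np.linspace(0, 1, 50)

errors = []
for n in (10, 100, 1000):
    x = rng.uniform(0, 1, n)
    y = f_true(x) + 0.1 * rng.normal(size=n)
    err = np.max(np.abs(posterior_mean(x, y, x_test) - f_true(x_test)))
    errors.append(err)
    print(f"n = {n:5d}   sup-norm error of posterior mean: {err:.3f}")
```

In this toy example the length scale is fixed by hand; the adaptation result mentioned in the abstract corresponds, roughly, to putting a prior (e.g. inverse Gamma) on such a scaling parameter so that the posterior attains the minimax rate without knowing the true smoothness.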