Bayesians turn to Experts for Advice!
If you have a question about this talk, please contact Julia Blackwell.

In Bayesian model selection and model averaging, inference is normally based on a posterior distribution over the models, usually interpreted as a measure of how likely we consider each model to be “true”, or at least in some sense close to true, given the observations. Rather than with truth, I will be concerned with the more practical goal of finding a “useful” model, in the sense that it predicts future outcomes of the underlying process well. As it turns out, the most useful model may well vary with the number of available observations: given ten samples from some continuous density, for instance, a seven-bin histogram model is more useful than a 1,000-bin model, even though the latter is arguably closer to being “true”. Methods for tracking such transient performance of prediction strategies have already been developed in the learning-theory literature under the heading “prediction with expert advice”. I will illustrate how these methods can improve model selection performance using results from computer simulations on density estimation problems.
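The expert-advice idea the abstract alludes to can be sketched with a small simulation. Below, two histogram density models (a coarse seven-bin one and a fine 1,000-bin one) act as "experts", and an exponential-weights update under log loss reweights them by their predictive density on each new observation. This is a minimal illustrative sketch, not the speaker's method: the `Histogram` class, the Laplace smoothing, and the Beta sampling distribution are all assumptions made for the example.

```python
import random

random.seed(0)

class Histogram:
    """A sequential histogram density estimator on [0, 1] (illustrative only)."""
    def __init__(self, bins):
        self.bins = bins
        self.counts = [1.0] * bins   # Laplace-smoothed bin counts
        self.total = float(bins)

    def density(self, x):
        # Predictive density at x: bin probability divided by bin width.
        b = min(int(x * self.bins), self.bins - 1)
        return (self.counts[b] / self.total) * self.bins

    def update(self, x):
        b = min(int(x * self.bins), self.bins - 1)
        self.counts[b] += 1.0
        self.total += 1.0

# Two "experts" with different resolutions.
experts = [Histogram(7), Histogram(1000)]
weights = [0.5, 0.5]

for _ in range(200):
    x = random.betavariate(2, 5)  # samples from a smooth density on [0, 1]
    # Exponential-weights update under log loss: multiply each expert's
    # weight by its predictive density at x, then renormalise.
    weights = [w * e.density(x) for w, e in zip(weights, experts)]
    s = sum(weights)
    weights = [w / s for w in weights]
    for e in experts:
        e.update(x)

print(weights)  # with few samples, the coarse model tends to dominate
```

With this learning rate, the update coincides with Bayesian model averaging; the talk's point is that weighting schemes which can track the *currently* best expert, rather than the best on average, perform better when the most useful model changes with sample size.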
This talk is part of the Statistical Laboratory Graduate Seminars series.