Bayesians turn to Experts for Advice!
If you have a question about this talk, please contact Julia Blackwell.

In Bayesian model selection and model averaging, inference is normally based on a posterior distribution on the models, usually interpreted as a measure of how likely we consider each of the models to be “true”, or at least in some sense close to true, given the observations. Rather than with truth, I will be concerned with the more practical goal of finding a “useful” model, in the sense that it predicts future outcomes of the underlying process well. As it turns out, the most useful model may well vary depending on the number of available observations! For instance, given ten samples from some continuous density, a seven-bin histogram model is more useful than a 1,000-bin model, even though the latter is arguably closer to being “true”. Fortunately, methods for tracking the transient performance of prediction strategies have already been developed in the learning theory literature under the heading “prediction with expert advice”. I will illustrate how these methods can improve model selection performance using results from computer simulations on density estimation problems.
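The idea in the abstract can be sketched with a small simulation. Below, each “expert” is a histogram density model with a different number of bins, and an exponential-weights (Hedge) update under log loss reweights the experts sequentially; with learning rate 1 this coincides with Bayesian updating of the model posterior. All specifics (bin counts, the Beta source density, Laplace smoothing) are illustrative assumptions, not details taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram_density(samples, n_bins, x):
    """Predictive density at x in [0,1] from a histogram with Laplace smoothing."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    counts, _ = np.histogram(samples, bins=edges)
    # add-one smoothing keeps every bin's mass positive, so log loss is finite
    probs = (counts + 1.0) / (len(samples) + n_bins)
    bin_idx = min(int(x * n_bins), n_bins - 1)
    return probs[bin_idx] * n_bins  # density = bin mass / bin width

bin_counts = [2, 5, 10, 50]                       # the competing "experts"
weights = np.ones(len(bin_counts)) / len(bin_counts)
eta = 1.0                                          # learning rate; eta=1 gives Bayes

data = rng.beta(2.0, 5.0, size=200)                # samples from a continuous density

for t in range(1, len(data)):
    past, x = data[:t], data[t]
    dens = np.array([histogram_density(past, k, x) for k in bin_counts])
    loss = -np.log(dens)                           # log loss of each expert on x
    weights *= np.exp(-eta * loss)                 # Hedge / exponential-weights update
    weights /= weights.sum()

print(dict(zip(bin_counts, np.round(weights, 3))))
```

Early in the sequence the coarse histograms tend to carry the weight; as more data arrives, finer models catch up — the “most useful model varies with sample size” effect described above.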
This talk is part of the Statistical Laboratory Graduate Seminars series.