University of Cambridge > Statistics > Approximate Inference for the Loss-Calibrated Bayesian

Approximate Inference for the Loss-Calibrated Bayesian


If you have a question about this talk, please contact Richard Nickl.

Bayesian decision theory provides a well-defined theoretical framework for rational decision making under uncertainty. However, even if we assume that our subjective beliefs about the world have been well specified, we usually need to resort to approximations in order to use them in practice. Despite the central role of the loss in the decision-theoretic formulation, most prevalent Bayesian approximation methods focus on approximating the posterior over parameters with no consideration of the loss. In this talk, our main point is to bring back into focus the need to calibrate approximation methods to the loss under consideration. This philosophy has already been widely applied in the frequentist statistics and discriminative machine learning literature, for example through the use of surrogate loss functions, but, surprisingly, not in Bayesian statistics. We provide examples showing the limitations of disregarding the loss in standard approximate inference schemes, and outline several interesting research directions arising from this new perspective. As a first loss-calibrated attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when the losses are asymmetric.
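As a minimal illustration of why the loss matters for Bayesian decisions (this is not the talk's algorithm, just a textbook example under an assumed loss matrix): with an asymmetric loss, the decision that minimises the posterior expected loss is no longer obtained by thresholding the posterior probability at 0.5.

```python
import numpy as np

# Hypothetical loss matrix L[d, y]: cost of decision d when the true label is y.
# Here a false negative (decide 0 when y = 1) costs 10x a false positive.
L = np.array([[0.0, 10.0],   # decide 0
              [1.0,  0.0]])  # decide 1

def bayes_decision(p1):
    """Return the decision minimising posterior expected loss, given p1 = p(y=1 | x)."""
    p = np.array([1.0 - p1, p1])
    risks = L @ p            # expected loss of each decision under the posterior
    return int(np.argmin(risks))

# Deciding 1 is optimal whenever 10*p1 > 1*(1 - p1), i.e. p1 > 1/11,
# far below the threshold of 0.5 implied by a symmetric 0-1 loss.
print(bayes_decision(0.2))   # -> 1 under the asymmetric loss
print(bayes_decision(0.05))  # -> 0
```

The point of loss calibration is that if an approximation to the posterior is accurate in regions irrelevant to this risk comparison but poor near the decision boundary, the resulting decisions can be bad even when the approximation looks good by generic (loss-agnostic) criteria.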

This talk is part of the Statistics series.



© 2006-2017, University of Cambridge.