Approximate Inference for the Loss-Calibrated Bayesian

If you have a question about this talk, please contact Emli-Mari Nel.

Bayesian decision theory provides a well-defined theoretical framework for rational decision making under uncertainty. However, even if we assume that our subjective beliefs about the world have been well specified, we usually need to resort to approximations in order to use them in practice. Despite the central role of the loss in the decision-theoretic formulation, most prevalent approximation methods seem to focus on approximating the posterior over parameters with no consideration of the loss. In this talk, our main point is to bring back into focus the need to calibrate approximation methods to the loss under consideration. This philosophy has already been widely applied in the frequentist statistics and discriminative machine learning literatures, for example through the use of surrogate loss functions. In contrast, the “loss-calibrated” approximation approach in Bayesian statistics seems to have been mainly limited to simple settings and losses, such as regression with quadratic loss or hypothesis testing with 0-1 loss. We provide examples showing the limitations of disregarding the loss in standard approximate inference schemes and explore loss-calibrated alternatives.
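
As a concrete illustration of the abstract's point (a minimal sketch, not drawn from the talk itself), the Python snippet below contrasts a loss-oblivious Gaussian approximation with acting on the posterior directly, under an asymmetric loss. The skewed lognormal "posterior", the pinball-style loss, and the cost values are all illustrative assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative skewed "posterior" over a parameter theta, represented by
# Monte Carlo samples (a stand-in for whatever inference produced them).
theta = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

# Asymmetric linear loss: under-estimating theta costs 9x more than over-estimating.
c_over, c_under = 1.0, 9.0

def expected_loss(a, samples):
    return np.mean(c_over * np.maximum(a - samples, 0.0)
                   + c_under * np.maximum(samples - a, 0.0))

# For this loss the Bayes-optimal action is the tau-quantile of the posterior.
tau = c_under / (c_under + c_over)  # 0.9

# Loss-calibrated route: act on the posterior samples directly.
a_exact = np.quantile(theta, tau)

# Loss-oblivious route: first fit a Gaussian by moment matching (ignoring the
# loss entirely), then act optimally under that approximation.
mu, sigma = theta.mean(), theta.std()
a_gauss = norm.ppf(tau, loc=mu, scale=sigma)

print(f"quantile of true posterior: a={a_exact:.2f}, risk={expected_loss(a_exact, theta):.3f}")
print(f"quantile of Gaussian fit:   a={a_gauss:.2f}, risk={expected_loss(a_gauss, theta):.3f}")

Because the posterior risk of this loss is minimised uniquely at the tau-quantile, the moment-matched Gaussian, whose 0.9-quantile sits well above the true posterior's, recommends an action with strictly higher expected loss; calibrating the approximation to the loss avoids this.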

This talk is part of the Inference Group series.
