Bayesian optimality and frequentist extended admissibility are equivalent in saturated models

If you have a question about this talk, please contact Louise Segar.

For finite parameter spaces under finite loss, every Bayesian procedure derived from a prior with full support is admissible, and every admissible procedure is Bayes. This relationship begins to break down as we move to continuous parameter spaces. Under some regularity conditions, admissible procedures can be shown to be limits of Bayesian procedures. Under additional regularity, they are generalized Bayesian, i.e., they minimize the Bayes risk with respect to an improper prior. In both cases, one must venture beyond the strict confines of Bayesian analysis. Using methods from mathematical logic and nonstandard analysis, we introduce the class of nonstandard Bayesian decision procedures, namely those whose Bayes risk with respect to some prior is within an infinitesimal of the optimal Bayes risk. Without any regularity conditions, we show that a decision procedure is extended admissible if and only if its nonstandard extension is nonstandard Bayes. We apply the nonstandard theory to derive a purely standard theorem: on a compact parameter space, every extended admissible estimator is Bayes if the risk function is continuous.
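The finite case mentioned at the start of the abstract can be illustrated concretely. The sketch below (a toy example; the risk numbers and procedure names are invented for illustration, not from the talk) sets up a two-point parameter space with four decision procedures, finds the Bayes procedure under a full-support prior, and checks that it is admissible while a dominated procedure is not.

```python
# Toy finite decision problem: two parameter values, four procedures.
# risks[d] = (risk of procedure d at theta_1, risk at theta_2); numbers are illustrative.
risks = {
    "d1": (1.0, 4.0),
    "d2": (2.0, 2.0),
    "d3": (4.0, 1.0),
    "d4": (3.0, 3.0),  # dominated by d2, hence inadmissible
}

def bayes_risk(d, prior):
    """Bayes risk: prior-weighted average of the risk function."""
    return sum(p * r for p, r in zip(prior, risks[d]))

def dominates(a, b):
    """a dominates b: risk of a is <= that of b everywhere, and < somewhere."""
    ra, rb = risks[a], risks[b]
    return all(x <= y for x, y in zip(ra, rb)) and any(x < y for x, y in zip(ra, rb))

def admissible(d):
    """A procedure is admissible if no other procedure dominates it."""
    return not any(dominates(other, d) for other in risks if other != d)

# Bayes procedure under a full-support prior (0.5, 0.5): minimizes Bayes risk.
prior = (0.5, 0.5)
best = min(risks, key=lambda d: bayes_risk(d, prior))
print(best, admissible(best))  # the Bayes rule under a full-support prior is admissible
print(admissible("d4"))        # d4 is dominated by d2, so it is inadmissible
```

Here `d2` attains the minimal Bayes risk of 2.0 and no procedure dominates it, matching the finite-case equivalence the abstract describes; the continuous-parameter and nonstandard results in the talk address what happens when this tidy picture breaks down.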

Joint work with Haosui Duanmu.

This talk is part of the Machine Learning @ CUED series.
