
Beyond Conformal Prediction: Distribution-Free Uncertainty Quantification for Complex Machine Learning Tasks


If you have a question about this talk, please contact Adrian Weller.

As we begin deploying machine learning models in consequential settings like medical diagnostics or self-driving vehicles, we need ways of knowing when a model may make a consequential error (for example, a car failing to detect a pedestrian). I'll discuss how to generate rigorous, finite-sample confidence intervals for any prediction task, any model, and any dataset, for free. This will be a chalk talk. I will primarily discuss a flexible method called Learn then Test that works for a large class of prediction problems, including those with high-dimensional, structured outputs (e.g. instance segmentation, multiclass or hierarchical classification, protein folding, and so on).
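To give a flavour of the distribution-free guarantees the abstract describes, here is a minimal sketch of split conformal prediction for regression, the simplest member of this family. Everything here is illustrative: the model, data, and coverage level are assumptions, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical pre-fitted predictor; any black-box model works.
    return 2.0 * x

# Calibration data, assumed exchangeable with future test points.
x_cal = rng.uniform(0.0, 1.0, 500)
y_cal = 2.0 * x_cal + rng.normal(0.0, 0.3, 500)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model(x_cal))

# Conformal quantile for 90% coverage: the ceil((n+1)(1-alpha))/n
# empirical quantile of the calibration scores.
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
qhat = np.quantile(scores, q_level, method="higher")

def predict_interval(x):
    # Finite-sample valid interval [f(x) - qhat, f(x) + qhat]:
    # covers the true y with probability at least 1 - alpha,
    # for any model and any data distribution.
    return model(x) - qhat, model(x) + qhat

lo, hi = predict_interval(0.5)
```

Learn then Test generalizes this idea beyond intervals, calibrating arbitrary risk functions for structured outputs, but the core recipe is the same: hold out calibration data, compute scores, and pick a threshold with a finite-sample guarantee.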

This talk is part of the Machine Learning @ CUED series.



 
