
Gaussian Processes I have Known


If you have a question about this talk, please contact Elre Oldewage.

Since I first used Gaussian processes (GPs) in a paper in 1978, they have turned out to be very powerful tools in many application areas, including machine learning. I will briefly cover several of the areas where I have personally used them, with particular emphasis on two.

The first is the field that has come to be known as uncertainty quantification, which concerns quantifying uncertainty in the predictions of simulators, i.e. mechanistic computer codes. A GP is used to model the simulator, treating it as a function that maps inputs to outputs, so this kind of use will be familiar; the GP is then called an emulator. A second GP models another function, the model discrepancy, defined as the difference between the simulator output and reality – a necessary component because no simulation model is perfect.
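To make the emulator/discrepancy setup concrete, here is a minimal sketch (my own illustration, not code from the talk) using scikit-learn's GaussianProcessRegressor: one GP is fitted to a handful of runs of a toy "simulator", and a second GP is fitted to the residuals between hypothetical field observations and the emulator's predictions, standing in for the model discrepancy. The toy functions, kernels, design points and noise levels are all assumptions chosen purely for illustration.

```python
# Sketch: GP emulator of a toy simulator, plus a GP for model discrepancy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulator(x):
    # Stand-in for an expensive mechanistic code.
    return np.sin(3.0 * x) + 0.5 * x

def reality(x):
    # Hypothetical "true" process; the simulator is deliberately imperfect.
    return np.sin(3.0 * x) + 0.7 * x + 0.2

# 1) Emulator: a GP fitted to a small design of simulator runs.
X_design = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_sim = simulator(X_design).ravel()
emulator = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.5),
    alpha=1e-8, normalize_y=True,
)
emulator.fit(X_design, y_sim)

# 2) Model discrepancy: a second GP fitted to (observation - emulator mean)
#    at the locations where field data are available.
X_obs = np.array([[0.2], [0.9], [1.6]])
y_obs = reality(X_obs).ravel() + 0.05 * rng.standard_normal(3)  # noisy field data
discrepancy = GaussianProcessRegressor(
    kernel=ConstantKernel(0.1) * RBF(length_scale=1.0),
    alpha=0.05**2, normalize_y=True,
)
discrepancy.fit(X_obs, y_obs - emulator.predict(X_obs))

# Prediction of reality = emulator mean + discrepancy mean; adding the
# variances assumes the two GPs are independent (a modelling choice).
X_new = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
m_sim, s_sim = emulator.predict(X_new, return_std=True)
m_disc, s_disc = discrepancy.predict(X_new, return_std=True)
m_reality = m_sim + m_disc
s_reality = np.sqrt(s_sim**2 + s_disc**2)
```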

A related use arises because we cannot quantify uncertainty about the outputs unless we quantify uncertainty in the inputs, including the often numerous uncertain parameters of the simulator. That involves eliciting the knowledge of experts about those parameters and expressing it in the form of a probability distribution. My second application of GPs is to represent uncertainty about the elicited distribution.
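The first part of that point, pushing an elicited input distribution through the emulator by simple Monte Carlo, can be sketched as follows (again my own illustration, continuing the previous code block and reusing the `emulator` and `rng` defined there; the Gamma distribution standing in for an elicited input is purely hypothetical). The speaker's second application, placing a GP over the elicited distribution itself, is not reproduced here.

```python
# Sketch: propagate an elicited input distribution through the emulator.
import numpy as np

# Suppose elicitation yields a Gamma distribution for an uncertain simulator
# input (shape/scale values here are illustrative only).
x_samples = rng.gamma(shape=2.0, scale=0.4, size=5000).reshape(-1, 1)

# Push each sampled input through the emulator; drawing from the GP's
# predictive distribution keeps emulator (code) uncertainty in the answer.
mean, std = emulator.predict(x_samples, return_std=True)
y_samples = mean + std * rng.standard_normal(mean.shape)

print("output mean:", y_samples.mean())
print("95% interval:", np.percentile(y_samples, [2.5, 97.5]))
```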

This talk is part of the Machine Learning Reading Group @ CUED series.
