
Advances in Neural Processes


If you have a question about this talk, please contact ellis-admin.

The Cambridge ELLIS Unit has started a Seminar Series that will include talks by leading researchers in the area of machine learning and AI. Our next speaker will be Prof. Richard Turner. Details of his talk can be found below.

Title: “Advances in Neural Processes”

Abstract: Traditional deep learning algorithms fit their parameters to a dataset using an iterative method like gradient descent and return predictions at a set of user-specified locations. Viewed end to end, such an algorithm is a function—albeit a complex one—that takes the dataset as input, computes model parameters, and returns predictions. A neural process models this mapping from datasets to predictive distributions directly. In this way, neural processes blend ideas from deep learning with those of stochastic processes such as the well-known Gaussian process. They also connect to meta-learning approaches, as they are trained over a set of tasks. Neural processes have a host of promising applications, for example to the irregularly sampled time series encountered in healthcare settings and to the off-the-grid spatial data encountered in environmental science. There has been a huge amount of recent activity in this area. My talk will give a short tutorial on neural processes and explain some of the recent contributions my group has made to this research effort.
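To make the "datasets in, predictive distributions out" idea concrete, here is a minimal NumPy sketch of a Conditional Neural Process forward pass. It is an illustrative toy, not the speaker's implementation: the network weights are random and untrained, the architecture (a per-point encoder, a permutation-invariant mean aggregation, and a decoder producing a Gaussian mean and variance at each target location) is the standard CNP template, and all function names (`mlp`, `cnp_predict`, etc.) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden, d_out):
    """Random (untrained) weights for a two-layer MLP."""
    return [(rng.normal(0, 0.5, (d_in, d_hidden)), np.zeros(d_hidden)),
            (rng.normal(0, 0.5, (d_hidden, d_out)), np.zeros(d_out))]

def mlp(params, x):
    """Apply a two-layer MLP with a tanh hidden activation."""
    (W1, b1), (W2, b2) = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Encoder maps each context pair (x_i, y_i) to a representation;
# decoder maps (r, x_target) to a predictive mean and log-variance.
d_r = 8
encoder = init_mlp(2, 16, d_r)
decoder = init_mlp(d_r + 1, 16, 2)

def cnp_predict(x_context, y_context, x_target):
    """One forward pass of a (toy) Conditional Neural Process."""
    pairs = np.stack([x_context, y_context], axis=-1)   # (N, 2)
    # Mean over context points makes the model permutation-invariant
    # in the dataset, mirroring how a stochastic process treats data.
    r = mlp(encoder, pairs).mean(axis=0)                # (d_r,)
    inputs = np.concatenate(
        [np.tile(r, (len(x_target), 1)), x_target[:, None]], axis=-1)
    out = mlp(decoder, inputs)                          # (M, 2)
    mean, log_var = out[:, 0], out[:, 1]
    return mean, np.exp(log_var)                        # predictive mean, variance

# Usage: condition on 5 observed points, predict at 50 target locations.
xc = np.linspace(-1, 1, 5)
yc = np.sin(np.pi * xc)
mean, var = cnp_predict(xc, yc, np.linspace(-1, 1, 50))
```

Because the whole pipeline is one differentiable function of the weights, training simply maximises the predictive log-likelihood of held-out target points across many tasks, which is the meta-learning connection mentioned in the abstract.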

This talk is part of the Cambridge ELLIS Unit series.



© 2006-2021, University of Cambridge.