Generative models for few-shot prediction tasks

If you have a question about this talk, please contact jg801.

Few-shot density estimation lies at the core of current meta-learning (or ‘learning to learn’) research and is crucial for intelligent systems to be able to adapt quickly to unseen tasks. In this talk we will introduce generative query networks (GQNs, published in Science last year), a generative model for few-shot scene understanding that learns to capture the main features of synthetic 3D scenes. In the second half of the talk we will cover neural processes (NPs), a generalisation of the GQN training regime to a wider range of tasks such as regression and classification. NPs are inspired by the flexibility of stochastic processes such as Gaussian processes, but are structured as neural networks and trained via gradient descent. We show how NPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets.
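As a rough illustration of the idea (not the implementation presented in the talk or in the papers), a conditional variant of a neural process can be sketched in a few lines of PyTorch: an encoder maps each context (x, y) pair to a representation, the representations are averaged into a permutation-invariant summary of the observed task, and a decoder conditions on that summary to output a predictive Gaussian at target inputs. The network sizes and the sine-wave toy task below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class ConditionalNeuralProcess(nn.Module):
    """Minimal CNP-style sketch: encode (x, y) context pairs, average them
    into a single task representation, then decode a predictive Gaussian
    at the target inputs."""

    def __init__(self, x_dim=1, y_dim=1, r_dim=128, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, r_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + r_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * y_dim),  # predictive mean and raw scale
        )

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Permutation-invariant summary of the context set (mean aggregation).
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=1, keepdim=True)
        r = r.expand(-1, x_tgt.shape[1], -1)
        out = self.decoder(torch.cat([x_tgt, r], dim=-1))
        mean, raw_sigma = out.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * torch.nn.functional.softplus(raw_sigma)  # keep scale positive
        return torch.distributions.Normal(mean, sigma)

# One illustrative training step on random noisy sine-wave tasks.
model = ConditionalNeuralProcess()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 20, 1) * 6 - 3                   # 16 tasks, 20 points each
y = torch.sin(x) + 0.1 * torch.randn_like(x)
pred = model(x[:, :5], y[:, :5], x)                 # condition on 5 context points per task
loss = -pred.log_prob(y).mean()                     # maximise predictive log-likelihood
opt.zero_grad(); loss.backward(); opt.step()
```

The mean aggregation is what gives the model its few-shot flavour: the same trained network can condition on any number of context points at test time, echoing how a stochastic process defines predictions for arbitrary conditioning sets.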

This talk is part of the Machine Learning Reading Group @ CUED series.
