
Deep Generative Models


If you have a question about this talk, please contact Shixiang Gu.

Deep latent-variable generative models have recently attracted a great deal of attention from the research community. In the first half of this talk we will present a number of recent advances in variational auto-encoder (VAE) based models, focusing on the use of probabilistic modelling tools to increase the capacity of the standard VAE and to allow it to be applied to a broader range of learning tasks. The talk will be self-contained, but will include only a brief review of the standard VAE concepts; familiarity with the material in [1] is therefore recommended.

[1] Kingma, D. P., and Welling, M. Auto-Encoding Variational Bayes.
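To make the standard VAE objective concrete, the following is a minimal numpy sketch of a one-sample Monte Carlo estimate of the evidence lower bound (ELBO) with a diagonal Gaussian posterior and Bernoulli likelihood. The `decode` function here is a hypothetical stand-in for a trained decoder network, not part of any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_sample(x, mu, log_var, decode):
    """One-sample Monte Carlo estimate of the VAE evidence lower bound.

    `mu` and `log_var` parameterize the diagonal Gaussian posterior q(z|x);
    `decode` maps a latent z to Bernoulli parameters for x (a hypothetical
    stand-in for the decoder network).
    """
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps

    # Bernoulli log-likelihood log p(x | z).
    p = decode(z)
    log_lik = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

    return log_lik - kl

# Toy usage: a fixed "decoder" that squashes z through a sigmoid.
decode = lambda z: 1.0 / (1.0 + np.exp(-z))
x = (rng.random(4) > 0.5).astype(float)
print(elbo_sample(x, mu=np.zeros(4), log_var=np.zeros(4), decode=decode))
```

Because the reconstruction term is a Bernoulli log-likelihood and the KL term is non-negative, this estimate is always at most zero for binary data.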

The second half of the talk will cover a subset of the literature on tractable deep generative models, which can be trained directly by maximum likelihood. We will cover auto-regressive models such as PixelCNN, and models composed of a sequence of tractable transformations such as Real NVP. We will compare and contrast the various approaches discussed.
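The key idea behind flow-based models such as Real NVP is a sequence of invertible transformations with cheaply computable Jacobian determinants. The sketch below, assuming a toy conditioning network, shows a single affine coupling layer: half of the input passes through unchanged, the other half is affinely transformed, which makes both inversion and the log-determinant exact and cheap. The `shift_scale` function is a hypothetical stand-in for the learned scale-and-shift networks.

```python
import numpy as np

def affine_coupling_forward(x, shift_scale):
    """One affine coupling layer (forward pass), in the style of Real NVP.

    Splits x into halves (x1, x2); x1 passes through unchanged while x2 is
    affinely transformed with parameters computed from x1.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    log_s, t = shift_scale(x1)          # hypothetical conditioning network
    y2 = x2 * np.exp(log_s) + t
    # The Jacobian is triangular, so log|det| is just the sum of log scales.
    log_det = np.sum(log_s, axis=-1)
    return np.concatenate([x1, y2], axis=-1), log_det

def affine_coupling_inverse(y, shift_scale):
    """Exact inverse of the coupling layer: y1 is untouched, so the same
    conditioning parameters can be recomputed from it."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    log_s, t = shift_scale(y1)
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2], axis=-1)

# Toy conditioning "network": fixed maps for the log-scale and shift.
shift_scale = lambda h: (0.5 * h, h - 1.0)
x = np.array([0.3, -1.2, 0.7, 2.0])
y, log_det = affine_coupling_forward(x, shift_scale)
x_rec = affine_coupling_inverse(y, shift_scale)
print(np.allclose(x, x_rec))  # True: the coupling layer is exactly invertible
```

Stacking many such layers (alternating which half passes through) yields an expressive yet tractable density, evaluated via the change-of-variables formula with the accumulated log-determinants.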

This talk is part of the Machine Learning Reading Group @ CUED series.
