Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks

If you have a question about this talk, please contact Elre Oldewage.

Variational Autoencoders (VAEs) are deep generative latent variable models that are widely used for a number of downstream tasks: semi-supervised learning, learning compressed and disentangled representations, and adversarial robustness. VAEs are popular because they are easy to implement and train; in particular, the common choice of mean-field Gaussian (MFG) approximate posteriors (MFG-VAE) yields an inference procedure that is straightforward to implement and stable in training. Unfortunately, a growing body of work has demonstrated that MFG-VAEs suffer from a variety of pathologies, including learning uninformative latent codes and unrealistic data distributions. When the data consist of images or text, we often rely on "gut checks" to ensure that the quality of the learned latent representations and generated data is high, but for numeric data (e.g. medical EHR data) no such gut checks are available. Existing work lacks a characterization of exactly when these pathologies occur and how they impact downstream task performance. In this talk, we will characterize when VAE training exhibits pathologies (as global optima of the ELBO) and connect these failure modes to undesirable effects on specific downstream tasks.
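For concreteness, below is a minimal sketch (not from the talk) of the MFG-VAE objective described above: an encoder outputs the mean and log-variance of a diagonal (mean-field) Gaussian posterior, and training maximizes the ELBO. It assumes a PyTorch implementation with a unit-variance Gaussian likelihood for numeric data; the class name MFGVAE, architecture, and dimensions are illustrative assumptions.

    # Minimal MFG-VAE sketch (illustrative; unit-variance Gaussian likelihood).
    import torch
    import torch.nn as nn

    class MFGVAE(nn.Module):
        def __init__(self, x_dim=10, z_dim=2, h_dim=64):
            super().__init__()
            # Encoder outputs mean and log-variance of the mean-field
            # Gaussian approximate posterior q(z|x).
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, 2 * z_dim))
            # Decoder parameterizes the likelihood p(x|z); here a Gaussian
            # mean with fixed unit variance, suited to numeric (non-image) data.
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def elbo(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            # Gaussian log-likelihood log p(x|z), up to an additive constant.
            recon = -0.5 * ((x - self.dec(z)) ** 2).sum(-1)
            # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
            kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)
            return (recon - kl).mean()

    model = MFGVAE()
    x = torch.randn(32, 10)     # toy numeric data batch
    loss = -model.elbo(x)       # maximizing the ELBO = minimizing its negative
    loss.backward()

The pathologies discussed in the talk concern global optima of exactly this objective, e.g. solutions where the KL term drives q(z|x) toward the prior and the latent code becomes uninformative.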

This talk is part of the Machine Learning Reading Group @ CUED series.
