
Neural Variational Inference for NLP


If you have a question about this talk, please contact Kris Cao.

Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we discuss introducing an inference network, conditioned on the discrete text input, to provide the variational distribution in latent variable models for NLP. For models with continuous latent variables drawn from particular distributions, such as Gaussians, there exist reparameterisations (Kingma & Welling, 2014; Rezende et al., 2014) that permit unbiased, low-variance estimates of the gradients with respect to the parameters of the inference network. For models with discrete latent variables, Monte-Carlo gradient estimates such as the score-function (REINFORCE) estimator must be employed instead, combined with variance-reduction techniques such as learned baselines to make learning effective (Mnih & Gregor, 2014; Mnih et al., 2014). In this talk, I will cover latent variable models for NLP with continuous or discrete latent variables, and their corresponding neural variational inference methods.
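As a rough illustration (not code from the talk), the sketch below contrasts the two gradient estimators mentioned in the abstract: a reparameterised Gaussian latent variable versus a score-function (REINFORCE) estimator for a Bernoulli latent. The toy log p(x, z), the constant baseline, and the tensor shapes are all assumptions made for the example.

```python
# Minimal sketch of the two gradient estimators, using PyTorch distributions.
import torch
from torch.distributions import Normal, Bernoulli

torch.manual_seed(0)

def log_p(z):
    # Stand-in for log p(x, z) of a generative model; here just a fixed Gaussian.
    return Normal(2.0, 1.0).log_prob(z).sum()

# --- Continuous latent: reparameterisation (Kingma & Welling, 2014) ---
# "Inference network" outputs: parameters of q(z | x).
mu = torch.zeros(5, requires_grad=True)
log_sigma = torch.zeros(5, requires_grad=True)

q = Normal(mu, log_sigma.exp())
z = q.rsample()                          # z = mu + sigma * eps, eps ~ N(0, 1)
elbo = log_p(z) - q.log_prob(z).sum()    # Monte-Carlo estimate of the ELBO
(-elbo).backward()                       # low-variance pathwise gradients
print("reparameterised grad wrt mu:", mu.grad)

# --- Discrete latent: score-function / REINFORCE estimator ---
logits = torch.zeros(5, requires_grad=True)
qb = Bernoulli(logits=logits)
b = qb.sample()                          # sampling a discrete b is not differentiable
f = log_p(b) - qb.log_prob(b).sum()      # f(b) = log p(x, b) - log q(b)
baseline = 0.0                           # a learned baseline (Mnih & Gregor, 2014) would reduce variance
surrogate = (f.detach() - baseline) * qb.log_prob(b).sum() + f
(-surrogate).backward()                  # unbiased but higher-variance gradients
print("REINFORCE grad wrt logits:", logits.grad)
```

The reparameterised estimator differentiates through the sample itself, whereas the score-function estimator only needs log q(b) to be differentiable, which is why it applies to discrete latent variables at the cost of higher variance.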

This talk is part of the NLIP Seminar Series.

