Approximating the Kullback-Leibler Divergence Between GMMs

If you have a question about this talk, please contact Shakir Mohamed.

Paper: "Approximating the Kullback-Leibler divergence between Gaussian mixture models", http://pederao.googlepages.com/KLdiv.pdf

The Kullback-Leibler divergence is a widely used measure of the dissimilarity between two probability distributions. For two Gaussian mixture models, however, the KL divergence has no closed-form expression and cannot be computed analytically. Hershey and Olsen (IEEE International Conference on Acoustics, Speech, and Signal Processing 2007, http://pederao.googlepages.com/KLdiv.pdf) discuss several ways of approximating it, including Monte Carlo sampling and variational approximations.
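The simplest of these approximations, Monte Carlo sampling, can be sketched as follows. This is an illustrative sketch for one-dimensional mixtures, not code from the paper; the function names and the (weights, means, variances) interface are assumptions made here. It draws samples from the first mixture f and averages the log-density ratio, which converges to KL(f || g) as the number of samples grows.

```python
import numpy as np

def gmm_logpdf(x, weights, means, variances):
    # Log-density of a 1-D Gaussian mixture, computed stably via log-sum-exp.
    x = np.asarray(x, dtype=float)[:, None]          # shape (n, 1)
    log_comp = (np.log(weights)                      # mixture weights
                - 0.5 * np.log(2.0 * np.pi * variances)
                - (x - means) ** 2 / (2.0 * variances))   # shape (n, k)
    m = log_comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

def gmm_sample(n, weights, means, variances, rng):
    # Sample n points: pick a component, then draw from that Gaussian.
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], np.sqrt(variances[comp]))

def kl_monte_carlo(n, f, g, rng):
    # KL(f || g) ≈ (1/n) * sum_i [log f(x_i) - log g(x_i)],  x_i ~ f.
    x = gmm_sample(n, *f, rng)
    return np.mean(gmm_logpdf(x, *f) - gmm_logpdf(x, *g))

# Each mixture is an assumed (weights, means, variances) tuple of arrays.
rng = np.random.default_rng(0)
f = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 0.5]))
g = (np.array([0.3, 0.7]), np.array([0.0, 2.0]), np.array([1.0, 1.0]))
print(kl_monte_carlo(100_000, f, g, rng))
```

As a sanity check, for two single-component "mixtures" the estimate can be compared against the closed-form Gaussian KL, 0.5 * (log(v2/v1) + (v1 + (m1 - m2)**2)/v2 - 1); the estimator is unbiased but its variance shrinks only as 1/n, which is one motivation for the cheaper deterministic approximations the paper surveys.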

This talk is part of the Machine Learning Reading Group @ CUED series.


© 2006-2017 Talks.cam, University of Cambridge.