
Herding: Driving Deterministic Dynamics To Learn And Sample Probabilistic Models


If you have a question about this talk, please contact Konstantina Palla.

The herding algorithm was proposed as a deterministic dynamical system that integrates learning and inference for discrete Markov random fields (MRFs). It mitigates the slow-mixing problem of conventional MCMC-based learning algorithms, and the pseudo-samples it generates enjoy fast convergence of their average sufficient statistics to the empirical values on the training data set.
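
For readers unfamiliar with herding, the core recursion is simple: at each step the dynamics pick the state whose features best align with the current weight vector, then move the weights towards the empirical moments and away from the features of the state just emitted. The sketch below is a minimal illustration for a pair of binary variables; the feature map, data, and function names are my own assumptions, not taken from the talk.

    # Toy sketch of the herding update on a fully observed model with two
    # binary variables; the feature map and data are illustrative assumptions.
    import itertools
    import numpy as np

    def phi(x):
        # Sufficient statistics: two singleton features and one pairwise feature.
        return np.array([x[0], x[1], x[0] * x[1]], dtype=float)

    def herd(data, num_steps):
        states = list(itertools.product([0, 1], repeat=2))  # enumerable state space
        phi_bar = np.mean([phi(x) for x in data], axis=0)   # empirical moments
        w = phi_bar.copy()                                   # weight vector
        samples = []
        for _ in range(num_steps):
            # Pick the state whose features are most aligned with the weights,
            s = max(states, key=lambda x: float(w @ phi(x)))
            samples.append(s)
            # then update the weights towards the empirical moments and away
            # from the features of the state just emitted.
            w += phi_bar - phi(s)
        return samples

    data = [(0, 0), (0, 1), (1, 1), (1, 1)]
    samples = herd(data, 1000)
    # The feature average over the pseudo-samples tracks the empirical moments
    # at roughly O(1/T), rather than the O(1/sqrt(T)) of i.i.d. sampling.
    print(np.mean([phi(s) for s in samples], axis=0))
    print(np.mean([phi(x) for x in data], axis=0))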

In this talk, I will first introduce the basic idea of the herding algorithm and discuss a few properties that distinguish it from standard learning algorithms for MRFs. I will then present two extensions of herding as purely deterministic sampling algorithms for discrete and continuous state spaces, respectively; the fast convergence property is preserved in these extensions under suitable conditions. If time permits, I will also briefly describe an application of herding to structured prediction problems, including image segmentation and Go game prediction.
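
The continuous-state extension is likely in the spirit of kernel herding (Chen, Welling & Smola, 2010), which greedily selects pseudo-samples so that their mean embedding in an RKHS tracks that of the target distribution. The sketch below illustrates that technique rather than the specific algorithm presented in the talk; the Gaussian kernel, candidate-pool search, and synthetic data are assumptions.

    # Illustrative sketch of kernel herding for a continuous state space:
    # greedily pick points attracted to the (empirical) mean embedding of the
    # target and repelled from the points already chosen.
    import numpy as np

    def kernel(x, points, bandwidth=0.5):
        # Gaussian kernel between a single point x and an array of points.
        return np.exp(-np.sum((points - x) ** 2, axis=-1) / (2.0 * bandwidth ** 2))

    def kernel_herd(data, candidates, num_samples):
        samples = []
        for _ in range(num_samples):
            # Attraction: average kernel value to the target sample.
            attract = np.array([kernel(c, data).mean() for c in candidates])
            # Repulsion: discounted sum of kernel values to chosen pseudo-samples.
            if samples:
                repel = np.array([kernel(c, np.array(samples)).sum() for c in candidates])
            else:
                repel = np.zeros(len(candidates))
            repel /= len(samples) + 1
            samples.append(candidates[np.argmax(attract - repel)])
        return np.array(samples)

    rng = np.random.default_rng(0)
    target = rng.normal(size=(200, 2))          # stand-in for draws from the target
    pool = rng.uniform(-3, 3, size=(2000, 2))   # the argmax is searched over this pool
    super_samples = kernel_herd(target, pool, 50)
    print(super_samples[:5])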

This talk is part of the Machine Learning Reading Group @ CUED series.
