
Infinite Hidden Markov Models and Applications in NLP


If you have a question about this talk, please contact Johanna Geiss.

Since its invention 40 years ago, the Hidden Markov Model (HMM) has been successfully applied in domains such as vision, biology, and natural language processing. This success is arguably due to fast methods for inference (the forward-backward algorithm) and parameter learning (EM, Variational Bayes, etc.). In the standard supervised NLP setting, the number of hidden states (sometimes called the capacity of the HMM) is chosen according to the (labelled) dataset used. Recent work (Goldwater & Griffiths 2007, Johnson 2007) has shown that unsupervised HMMs can efficiently learn POS taggers from unlabelled data. However, the capacity used in that work is fixed in advance, which is not desirable when tackling new datasets or tasks, and it restricts the knowledge that can be learned from the data.
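To make the "fast inference" point concrete, here is a minimal sketch (not code from the talk) of the forward pass of the forward-backward algorithm, which computes the likelihood of an observation sequence in O(TK²) time for K hidden states; the matrices and sequence below are illustrative assumptions.

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """pi: (K,) initial state probabilities; A: (K, K) transitions
    with A[i, j] = P(state j | state i); B: (K, V) emissions with
    B[k, v] = P(symbol v | state k); obs: observation indices.
    Returns P(obs) by summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]          # alpha_1(k) = pi_k * b_k(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # recurse: marginalise previous state
    return alpha.sum()

# Toy example: two hidden states, two observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward_likelihood(pi, A, B, [0, 1, 0]))  # → 0.10893
```

The same dynamic-programming trick, run backwards, yields the posterior state marginals that EM uses for parameter re-estimation.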

Recently, the machine learning community has turned its attention to nonparametric Bayesian methods. This framework allows us to treat the capacity of a model as a parameter to be learned. In this talk, I will show how nonparametric methods can be used to construct a nonparametric version of the HMM. I will compare the infinite HMM with other HMM models in the context of part-of-speech tagging.
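As a rough illustration of how capacity can be left open rather than fixed, the sketch below (an assumption for exposition, not material from the talk) shows the stick-breaking construction of Dirichlet-process weights, which underlies nonparametric models such as the infinite HMM: each state takes a random fraction of the remaining probability "stick", so the number of states with non-negligible weight is effectively inferred from data rather than chosen in advance.

```python
import numpy as np

def stick_breaking(alpha, n_states, rng):
    """Truncated stick-breaking (GEM) sample: returns n_states weights
    that sum to less than 1, with leftover mass for unseen states."""
    betas = rng.beta(1.0, alpha, size=n_states)  # break proportions
    # Remaining stick length before each break: 1, (1-b1), (1-b1)(1-b2), ...
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(0)
weights = stick_breaking(alpha=2.0, n_states=50, rng=rng)
print(weights.sum())  # close to 1; tail mass reserved for new states
```

Larger concentration `alpha` spreads mass over more states, which is exactly the knob that lets the model grow its state inventory to fit the data.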

This talk is part of the NLIP Seminar Series.



