BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Infinite Hidden Markov Models and Applications in NLP - Jurgen van
  Gael\, Department of Engineering\, University of Cambridge
DTSTART:20081031T121500Z
DTEND:20081031T130000Z
UID:TALK14798@talks.cam.ac.uk
CONTACT:Johanna Geiss
DESCRIPTION:Since its invention 40 years ago\, the Hidden Markov Model (H
 MM) has been successfully applied in domains such as vision\, biology\, a
 nd natural language processing. This success is arguably due to fast met
 hods for inference (the forward-backward algorithm) and parameter learni
 ng (EM\, Variational Bayes\, etc.). In standard supervised NLP applicati
 ons\, the number of hidden states (sometimes called the capacity of the
  HMM) is chosen according to the (labelled) dataset used. Recent work (G
 oldwater & Griffiths 2007\, Johnson 2007) has shown that unsupervised HM
 Ms can efficiently learn POS taggers from unlabelled data. However\, the
  capacity used in that work is fixed in advance\, which is undesirable w
 hen tackling new datasets or tasks and restricts the knowledge that can
  be learned from the data.\n\nRecently\, the machine learning community
  has turned its attention to nonparametric Bayesian methods. This frame
 work allows us to treat the capacity of a model as a parameter to be le
 arned. In this talk\, I will show how nonparametric methods can be used
  to construct a nonparametric version of the HMM. I will compare the in
 finite HMM with other HMM variants in the context of part-of-speech tag
 ging.
LOCATION:SW01\, Computer Laboratory
END:VEVENT
END:VCALENDAR
