A Hierarchical Bayesian Language Model based on Pitman-Yor Processes
If you have a question about this talk, please contact Shakir Mohamed.
I will be discussing:
N-gram language modelling traditionally uses some form of "smoothing" to allocate probability mass to unseen N-grams. Over the years, people have come up with smoothing schemes that perform well in practice, but it's not easy to get a handle on what they're actually doing, or how to improve them.
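To illustrate the kind of smoothing being discussed, here is a minimal sketch of absolute-discounting interpolation for bigrams (one common smoothing scheme, not taken from the talk; the toy corpus and the discount value are my own illustrative choices):

```python
from collections import Counter

def bigram_prob(w_prev, w, bigrams, unigrams, d=0.75):
    """Absolute-discounted bigram probability: subtract a discount d from
    each observed bigram count and redistribute the freed mass across the
    whole vocabulary via the unigram (backoff) distribution."""
    total_prev = sum(c for (a, _), c in bigrams.items() if a == w_prev)
    p_backoff = unigrams[w] / sum(unigrams.values())
    if total_prev == 0:
        return p_backoff  # context never seen: fall back entirely to unigrams
    c = bigrams.get((w_prev, w), 0)
    num_types = sum(1 for (a, _) in bigrams if a == w_prev)
    lam = d * num_types / total_prev  # mass freed by discounting
    return max(c - d, 0) / total_prev + lam * p_backoff

corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
p_seen = bigram_prob("the", "cat", bigrams, unigrams)
p_unseen = bigram_prob("the", "sat", bigrams, unigrams)  # unseen bigram still gets mass
```

Note that the discounted probabilities still sum to one over the vocabulary for a given context: the mass removed from seen bigrams exactly funds the backoff term.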
In this paper, Teh shows that a hierarchical Bayesian language model with a very simplistic model of context performs roughly as well as the current state-of-the-art smoothing schemes, and in fact has strong similarities to an existing smoothing scheme (interpolated Kneser-Ney).
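To make that connection concrete, here is a sketch of the Pitman-Yor predictive probability for a single context (this is an illustration under my own simplifying assumption of exactly one "table" per observed word type, not the paper's full hierarchical model; the counts, discount, and uniform base distribution are made up):

```python
def pitman_yor_pred(w, counts, d, theta, base):
    """Predictive probability of word w under a Pitman-Yor process with
    discount d and strength theta, assuming one table per observed word
    type. Under that assumption, and with theta = 0, this reduces to the
    absolute-discounting form of interpolated Kneser-Ney."""
    c_total = sum(counts.values())
    t_total = len(counts)  # tables = word types (simplifying assumption)
    c_w = counts.get(w, 0)
    discounted = max(c_w - d, 0.0)
    return (discounted + (theta + d * t_total) * base(w)) / (theta + c_total)

counts = {"cat": 2, "mat": 1}          # toy counts for one context
vocab = ["cat", "mat"] + [f"w{i}" for i in range(8)]
base = lambda w: 1.0 / len(vocab)      # uniform base distribution (assumption)
p_seen = pitman_yor_pred("cat", counts, d=0.5, theta=1.0, base=base)
p_new = pitman_yor_pred("w0", counts, d=0.5, theta=1.0, base=base)
```

The point of the comparison is that the discount d plays the same role as the absolute discount in Kneser-Ney, but here it falls out of the posterior of a coherent generative model rather than being a heuristic.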
This talk is part of the Machine Learning Reading Group @ CUED series.