
Bayesian Models for Dependency Parsing Using Pitman-Yor Priors


If you have a question about this talk, please contact David MacKay.

In this talk, I will introduce a Bayesian dependency parsing model for natural language based on the hierarchical Pitman-Yor process. The model arises from a Bayesian reinterpretation of a classic dependency parser. I will show that parsing performance can be substantially improved by (a) using a hierarchical Pitman-Yor process as a prior over each word's distribution over dependents, and (b) sampling the model hyperparameters. Finally, I will present a second Bayesian dependency model in which latent state variables mediate the relationships between words and their dependents. This model clusters parent-child dependencies into states, much as Bayesian topic models cluster words into topics. Each latent state may be viewed as a sort of specialised part-of-speech tag or “syntactic topic” that captures the relationships between words and their dependents.
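
As background for the abstract (not part of the talk itself), here is a minimal Python sketch of the Chinese restaurant predictive rule that a Pitman-Yor prior induces over a word's dependents. It assumes the standard two-parameter (discount, concentration) formulation and the common simplification of seating all occurrences of a value at one table; the function name, toy vocabulary, and example parameters are illustrative, not the speaker's implementation.

```python
import random

def py_crp_sample(counts, discount, concentration, base_sample):
    """Draw the next dependent from a Pitman-Yor Chinese restaurant process.

    counts        -- dict: dependent word -> times it has been generated
    discount      -- Pitman-Yor discount d, with 0 <= d < 1
    concentration -- Pitman-Yor concentration (strength) theta
    base_sample   -- callable giving a draw from the base distribution;
                     in a hierarchical model this would itself be a
                     Pitman-Yor sampler over a coarser context

    Simplification: each distinct value occupies one table, so the
    table count equals the number of distinct values seen so far.
    """
    n = sum(counts.values())   # customers seated so far
    t = len(counts)            # occupied tables under the simplification
    # Open a new table (back off to the base distribution) with
    # probability (theta + d * t) / (n + theta); otherwise reseat at an
    # existing value with probability proportional to (count - d).
    if not counts or (random.random() * (n + concentration)
                      < concentration + discount * t):
        value = base_sample()
    else:
        r = random.uniform(0.0, n - discount * t)
        for value, c in counts.items():
            r -= c - discount
            if r <= 0.0:
                break
    counts[value] = counts.get(value, 0) + 1
    return value

# Toy usage: dependents of one head word, backing off to a uniform base.
vocab = ["pizza", "quickly", "John", "apples"]
counts = {}
for _ in range(20):
    py_crp_sample(counts, discount=0.5, concentration=1.0,
                  base_sample=lambda: random.choice(vocab))
print(counts)  # a few frequent dependents plus a long tail
```

The discount parameter is what distinguishes Pitman-Yor from the Dirichlet process: by subtracting d from each count and adding d per table to the back-off probability, it produces the power-law (Zipf-like) frequency behaviour characteristic of natural language, which is one reason such priors suit distributions over words.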

This talk is part of the Inference Group series.
