
Randomized Language Models via Perfect Hash Functions


If you have a question about this talk, please contact Diarmuid Ó Séaghdha.

At this session of the NLIP Reading Group we’ll be discussing the following paper:

David Talbot and Thorsten Brants. 2008. Randomized Language Models via Perfect Hash Functions. In Proceedings of ACL-08.

Abstract: We propose a succinct randomized language model which employs a perfect hash function to encode fingerprints of n-grams and their associated probabilities, backoff weights, or other parameters. The scheme can represent any standard n-gram model and is easily combined with existing model reduction techniques such as entropy-pruning. We demonstrate the space-savings of the scheme via machine translation experiments within a distributed language modeling framework.
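As a rough illustration of the fingerprinting idea described in the abstract (not the paper's actual perfect-hash construction), here is a minimal Python sketch in which n-grams are stored only as short hash fingerprints paired with quantized values, so unseen n-grams can occasionally collide with stored ones and return a wrong value. All names and parameters below (fingerprint, FINGERPRINT_BITS, the 12-bit width, the default score) are hypothetical choices for the sketch.

```python
import hashlib

# Illustrative sketch only: the paper uses a perfect hash function over the
# n-gram set (no explicit keys stored); a plain dict keyed by a short
# fingerprint stands in for that structure here.

FINGERPRINT_BITS = 12  # few bits per n-gram => small table, but false positives


def fingerprint(ngram, bits=FINGERPRINT_BITS):
    """Map an n-gram (tuple of tokens) to a short integer fingerprint."""
    digest = hashlib.md5(" ".join(ngram).encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % (1 << bits)


def build_table(ngram_logprobs):
    """Store quantized log-probabilities keyed only by fingerprints, never by
    the n-grams themselves (collisions among stored n-grams are ignored here)."""
    return {fingerprint(ng): round(lp, 2) for ng, lp in ngram_logprobs.items()}


def lookup(table, ngram, default=-99.0):
    """Return the stored value, or `default` if the fingerprint is absent.
    An unseen n-gram whose fingerprint collides with a stored one returns a
    wrong value: the small false-positive rate the randomized scheme accepts
    in exchange for a much smaller memory footprint."""
    return table.get(fingerprint(ngram), default)


if __name__ == "__main__":
    probs = {("the", "cat", "sat"): -1.2, ("on", "the", "mat"): -2.3}
    table = build_table(probs)
    print(lookup(table, ("the", "cat", "sat")))      # stored value
    print(lookup(table, ("purple", "cat", "sat")))   # usually default; rarely a collision
```

In the sketch, memory savings come from never storing the n-gram strings, only a few bits of fingerprint plus a quantized value per entry; the paper achieves the same effect with a perfect hash function, which removes even the explicit key lookup.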

This talk is part of the Natural Language Processing Reading Group series.
