
Numerically Grounded Language Models


If you have a question about this talk, please contact Kris Cao.

Assisted text input and editing tools can save time and effort and improve text quality. For example, word prediction presents the user with a list of choices for the next word; word completion helps them finish a word they have started typing; error detection identifies erroneous spans of text; and error correction recommends amendments to the text. Traditional language models fail to adequately address such tasks in domains that are rich in numerical mentions (e.g. clinical reports).
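As a minimal sketch of the word-prediction task described above (not the speaker's system), a toy bigram model can rank candidate next words by how often they followed the previous word in a training corpus; the function names here are illustrative:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev, k=3):
    """Return up to k most frequent continuations of `prev`."""
    return [word for word, _ in counts[prev].most_common(k)]

# Tiny illustrative corpus in the clinical-report flavour of the abstract.
corpus = [
    "the patient was discharged home",
    "the patient was stable",
    "the patient was discharged today",
]
model = train_bigram(corpus)
print(predict_next(model, "was"))  # most frequent continuations of "was"
```

A real assisted-input system would use a neural language model over a full vocabulary, but the interface is the same: given the preceding context, return a ranked list of candidate next words for the user to choose from.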

In this talk, I discuss extensions to neural language models that are sensitive to the numerical values encountered in the text itself. Continuous numerical mentions provide exact measurements of the state of the world without mapping to discrete verbal categories; the resulting models are therefore grounded through numbers. The proposed framework can also be used for language modelling conditioned on structured knowledge bases with inconsistent schemas or missing attributes of variable types. Finally, I will present experimental results for text input and editing tasks where numerically grounded and conditional models yield state-of-the-art results.

This talk is part of the NLIP Seminar Series.


© 2006-2019 Talks.cam, University of Cambridge.