
Measuring Factuality in Text Generation: When Language Models Are Twisting the Facts


If you have a question about this talk, please contact Marinela Parovic.

Text generation is at the core of many NLP tasks, including question answering, dialogue generation, machine translation, and text summarization. While current text generation models produce text that seems fluent and informative, their outputs often contain factual inconsistencies with respect to the inputs they rely on (a.k.a. "hallucinations"), making it hard to deploy such models in real-world applications.

In this talk I will present two of our recent works tackling these issues. First, I will describe KoBE (Gekhman et al., EMNLP Findings 2020), a knowledge-based approach for evaluating the quality of machine translation models, which uses multilingual entity resolution instead of human reference translations. I will then present Q^2 (Honovich et al., EMNLP 2021), an automatic evaluation metric that combines question generation, question answering and natural language inference to validate the outputs of dialogue generation models.
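To make the Q^2 pipeline concrete, the sketch below illustrates its control flow: extract informative answer spans from a system response, re-answer the implied questions against the grounding knowledge, and compare the two answers. This is a toy illustration, not the authors' implementation: the real metric uses neural question-generation, QA and NLI models, whereas here `toy_qa` is a literal-match stand-in and answers are compared with token-level F1.

```python
def token_f1(pred, gold):
    """Token-level F1 between two answer strings, used here as a simple
    stand-in for the answer-comparison step (the real Q^2 additionally
    applies NLI to judge answer equivalence)."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)


def toy_qa(question_span, knowledge):
    """Stand-in QA model: 'answers' the implied question by checking
    whether the span asked about is literally present in the knowledge."""
    return question_span if question_span.lower() in knowledge.lower() else ""


def q2_score(answer_spans, knowledge):
    """For each informative span in the system response, re-answer the
    implied question against the grounding knowledge and compare the
    answers; the score is the mean comparison over all spans."""
    scores = []
    for span in answer_spans:
        knowledge_answer = toy_qa(span, knowledge)
        scores.append(token_f1(knowledge_answer, span) if knowledge_answer else 0.0)
    return sum(scores) / len(scores) if scores else 0.0


knowledge = "The Eiffel Tower is located in Paris and was completed in 1889."
consistent_spans = ["Paris", "1889"]    # spans from a faithful response
hallucinated_spans = ["Paris", "1920"]  # "1920" is not supported

print(q2_score(consistent_spans, knowledge))    # 1.0
print(q2_score(hallucinated_spans, knowledge))  # 0.5
```

A fully unsupported response thus scores 0.0 and a fully grounded one 1.0; swapping in learned models for the stand-ins recovers the shape of the actual metric.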

This talk is part of the Language Technology Lab Seminars series.
