Towards explainable fact checking

If you have a question about this talk, please contact Haim Dubossarsky.

Automatic fact checking is one of the more involved NLP tasks currently being researched: it requires not only sentence understanding, but also an understanding of how claims relate to evidence documents and world knowledge. Moreover, there is still no common understanding in the automatic fact checking community of how the subtasks of fact checking (claim check-worthiness detection, evidence retrieval, veracity prediction) should be framed. This is partly due to the complexity of the task, which persists despite efforts to formalise fact checking through the development of benchmark datasets. This talk will re-examine how fact checking is defined, and present some of my recent work on training explainable fact checking models that expose the reasoning processes these models follow.

This talk is part of the Language Technology Lab Seminars series.
