
Learning with Explanations


If you have a question about this talk, please contact Edoardo Maria Ponti.

Abstract: Despite the success of deep learning models across a wide range of applications, these methods suffer from low sample efficiency and opaqueness. Low sample efficiency limits deep learning to domains with abundant training data, while opaqueness prevents us from understanding how a model derived a particular output, let alone from correcting systematic errors, removing bias, or incorporating common sense and domain knowledge. To address these issues for knowledge base completion, we developed end-to-end differentiable provers which (i) learn neural representations of symbols in a knowledge base, (ii) exploit similarities between learned symbol representations to prove queries against the knowledge base, (iii) induce logical rules, and (iv) use both provided and induced rules for multi-hop reasoning. I will present our recent efforts to apply differentiable provers to statements in natural language text and to large-scale knowledge bases. Furthermore, I will introduce two datasets for advancing the development of models that can incorporate natural language explanations: e-SNLI, crowdsourced explanations for over half a million sentence pairs in the Stanford Natural Language Inference corpus, and ShARC, a conversational question answering dataset with natural language rules.
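The core idea behind steps (i) and (ii) can be sketched in a few lines: symbols are vectors, and "unification" becomes a differentiable similarity score rather than an exact match. The toy example below, in the spirit of Neural Theorem Provers, is a hypothetical illustration only; the symbol names, the single rule, and the facts are invented for this sketch, not taken from the talk.

```python
import numpy as np

# Toy sketch of "soft unification" in a differentiable prover:
# each symbol is a learned vector, and two symbols unify to the
# degree that their embeddings are close. All names below are
# hypothetical; real systems learn these embeddings end-to-end.

rng = np.random.default_rng(0)
dim = 8
symbols = ["grandpaOf", "fatherOf", "parentOf", "abe", "homer", "bart"]
emb = {s: rng.normal(size=dim) for s in symbols}

def soft_unify(a: str, b: str) -> float:
    """Similarity in (0, 1] via an RBF kernel on embedding distance.
    Identical symbols score exactly 1.0; everything else scores lower,
    so proofs degrade gracefully instead of failing outright."""
    return float(np.exp(-np.linalg.norm(emb[a] - emb[b])))

# Knowledge-base facts and one induced-style rule:
#   grandpaOf(X, Z) :- fatherOf(X, Y), parentOf(Y, Z)
facts = [("fatherOf", "abe", "homer"), ("parentOf", "homer", "bart")]

def prove(query_pred: str, x: str, z: str) -> float:
    """Score query_pred(x, z) by soft-matching the rule head against
    the query predicate and the rule body against KB facts, combining
    scores with min (fuzzy AND) and taking max over proof paths (fuzzy OR)."""
    head = soft_unify(query_pred, "grandpaOf")
    best = 0.0
    for p1, a1, b1 in facts:
        for p2, a2, b2 in facts:
            if a1 == x and b2 == z and b1 == a2:  # chain via intermediate Y
                body = min(soft_unify(p1, "fatherOf"), soft_unify(p2, "parentOf"))
                best = max(best, min(head, body))
    return best

print(f"grandpaOf(abe, bart): {prove('grandpaOf', 'abe', 'bart'):.3f}")
# → grandpaOf(abe, bart): 1.000
```

Because every score is a smooth function of the embeddings, gradients flow from the final proof score back to the symbol representations, which is what makes rule induction (step iii) and multi-hop reasoning (step iv) trainable end-to-end.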

This talk is part of the Language Technology Lab Seminars series.



© 2006-2019 Talks.cam, University of Cambridge.