
Hierarchical Interpretation of Neural Text Classification


If you have a question about this talk, please contact Marinela Parovic.

Recent years have witnessed increasing interest in developing interpretable models in NLP. Most existing approaches aim to identify input features, such as words or phrases, that are important for model predictions. However, neural models in NLP typically compose word semantics in a hierarchical manner, so interpretations based on words or phrases alone cannot faithfully explain model decisions. In this talk, I will present our recently proposed Hierarchical Interpretable Neural Text classifier, called HINT, which identifies the latent semantic factors and their compositions that contribute to the model’s final decisions, often beyond what word-level interpretations can capture. Experimental results on both review and news datasets show that our approach achieves text classification performance on par with existing state-of-the-art text classifiers, while generating interpretations that are more faithful to model predictions and better understood by humans than those of other interpretable neural text classifiers.

This talk is part of the Language Technology Lab Seminars series.
