
Relevance Forcing: More Interpretable Neural Networks through Prior Knowledge


If you have a question about this talk, please contact Rachel Furner.

Neural networks achieve high accuracy across many classification tasks. However, these ‘black-box models’ suffer from one drawback: it is generally difficult to assess how a network reached its classification decision. Nevertheless, relevance measures make it possible to determine which parts of a given input contribute to the resulting output. By imposing penalties on this relevance, through which we can encode prior information about the problem domain, we can train models that take this information into account. If we view these relevance measures as discretized dynamical systems, we may gain some insight into the reliability of their explanations.
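The abstract does not specify the penalty used, but the idea of penalising relevance on input regions that prior knowledge marks as irrelevant can be illustrated with a minimal sketch. The example below is a hypothetical construction, not the speaker's method: it uses a linear (logistic-regression) model, where the input-gradient relevance of feature j is simply the weight w[j], so a relevance penalty on a masked feature reduces to a masked weight penalty added to the training loss. The feature layout, mask, and penalty strength lam are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: relevance-penalised training for a linear model.
# For a linear logit, d(logit)/dx_j = w[j], so penalising the relevance
# of features we believe to be irrelevant is a masked penalty on w.

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)           # label depends only on feature 0
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=n)  # feature 2: spurious correlate

# Prior knowledge: feature 2 should carry no relevance.
mask = np.array([0.0, 0.0, 1.0])

def train(lam, steps=2000, lr=0.1):
    """Gradient descent on cross-entropy + lam * sum(mask * w**2)."""
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid prediction
        grad = X.T @ (p - y) / n          # cross-entropy gradient
        grad += 2.0 * lam * mask * w      # gradient of the relevance penalty
        w -= lr * grad
    return w

w_plain = train(lam=0.0)   # unconstrained: leans on the spurious feature
w_forced = train(lam=1.0)  # relevance-forced: suppresses it
print(abs(w_plain[2]), abs(w_forced[2]))
```

In this toy setup the penalised model assigns much less weight (and hence input-gradient relevance) to the spurious feature while still fitting the labels through the genuine one. For deep networks the same idea would require differentiating through a relevance or input-gradient term with an autodiff framework.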

This talk is part of the CCIMI Seminars series.


© 2006-2019 Talks.cam, University of Cambridge.