
Strong Structural Priors for Neural Network Architectures


If you have a question about this talk, please contact Kris Cao.

Many current state-of-the-art methods in natural language processing and information extraction rely on representation learning. Despite the success and wide adoption of neural networks in the field, we still face major challenges, such as (i) efficiently estimating model parameters for domains where annotation is costly and only a few training examples are available, (ii) learning interpretable representations that allow inspection and debugging of deep neural networks, and (iii) incorporating commonsense knowledge and task-specific prior knowledge. To tackle these issues, advanced neural network architectures have recently been proposed, including differentiable memory, attention, data structures, and even differentiable Turing machines, program interpreters and theorem provers. In this talk I will give an overview of our work on such strong structural priors for sequence modeling, knowledge base completion and program induction.
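As a rough illustration of one of the structural priors mentioned above (not part of the original announcement), the following is a minimal sketch of scaled dot-product attention in NumPy. All function and variable names here are illustrative, and the sketch is only meant to show the general shape of an attention mechanism, not the speaker's specific models.

import numpy as np

def softmax(x, axis=-1):
    # Subtract the per-row maximum for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    # queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)
    # Each query produces a distribution over the keys and returns
    # the correspondingly weighted sum of the values.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_k) similarities
    weights = softmax(scores, axis=-1)       # attention weights per query
    return weights @ values                  # (n_q, d_v) weighted values

# Toy usage: one query attending over three memory slots.
q = np.random.randn(1, 4)
k = np.random.randn(3, 4)
v = np.random.randn(3, 8)
print(attention(q, k, v).shape)  # (1, 8)

The differentiable weighting is what makes such a memory or data-structure prior trainable end-to-end with gradient descent.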

This talk is part of the NLIP Seminar Series.
