
Continuous feature structures: Can we learn structured representations with neural networks?


If you have a question about this talk, please contact Andrew Caines.

The basic data structure for neural network models is the vector. While vectors are computationally efficient, they have a fixed number of dimensions, which makes it impossible to encode even basic data structures familiar to a first-year undergraduate, such as lists, trees, and graphs. In this talk, I will focus on feature structures, a general-purpose data structure notably used in HPSG grammars. One challenge in learning such structured representations is that they are discrete, which rules out training with gradient descent. I will present a continuous relaxation of feature structures that allows them to be used in neural networks trained by gradient descent. In particular, I will show how these continuous feature structures can replace the vectors in an LSTM, making it possible to learn feature structure representations of sentences.
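To make the idea of a continuous relaxation concrete, here is a minimal sketch (not the speaker's actual formulation): a discrete feature structure maps each feature to exactly one value, while the relaxed version holds a probability distribution over the value vocabulary for each feature, so the representation becomes differentiable. The feature names, value vocabulary, and scores below are invented for illustration.

```python
# Hedged sketch of a continuous relaxation of a (flat) feature structure.
# All names and numbers here are illustrative assumptions, not the
# speaker's method: each discrete value is replaced by a softmax
# distribution over a small value vocabulary, so gradients can flow.
import math

VALUES = ["sg", "pl", "1st", "2nd", "3rd"]  # toy value vocabulary

def softmax(scores):
    """Turn real-valued scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Discrete feature structure: each feature has exactly one value.
discrete = {"NUM": "sg", "PERS": "3rd"}

# Continuous relaxation: each feature holds a distribution over values,
# parameterised by scores (fixed numbers here; learnable in practice).
continuous = {
    "NUM": softmax([2.0, -1.0, -5.0, -5.0, -5.0]),   # mass mostly on "sg"
    "PERS": softmax([-5.0, -5.0, -1.0, -1.0, 2.0]),  # mass mostly on "3rd"
}

def hardest_value(dist):
    """Read off the discrete value a relaxed feature is closest to."""
    return VALUES[max(range(len(dist)), key=lambda i: dist[i])]

# The relaxation stays consistent with the discrete structure.
assert hardest_value(continuous["NUM"]) == discrete["NUM"]
assert hardest_value(continuous["PERS"]) == discrete["PERS"]
```

In a trained model the scores would be produced by the network itself (e.g. by an LSTM cell), and sharpening the distributions toward one-hot vectors would recover discrete feature structures.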

This talk is part of the NLIP Seminar Series.



© 2006-2019 Talks.cam, University of Cambridge.