University of Cambridge > Language Technology Lab Seminars > Learning and testing compositionality

Learning and testing compositionality


If you have a question about this talk, please contact Edoardo Maria Ponti.

While sequence-to-sequence (seq2seq) models have shown remarkable generalisation power across several natural language tasks, the solutions they construct are argued to be less compositional than human-like generalisation. In this talk, I will discuss our attempts to narrow this gap.

In the first part of the talk, I will introduce the notion of compositionality and explain why it is crucial for artificial learners to master it if they are to talk and think like people. I will then present three different strategies we followed to bias seq2seq models towards more compositional solutions.

First, I will talk about Attentive Guidance (AG), a new mechanism to direct a seq2seq model equipped with attention to find more compositional solutions. Models trained with AG come up with solutions that, in some cases, fit the training and testing distributions equally well.
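The abstract describes AG only at a high level; a minimal sketch of what supervising attention could look like is an auxiliary loss that pushes the model's attention weights towards gold alignments. The function name, and the assumption that gold alignments are available, are illustrative rather than the paper's actual API.

```python
import numpy as np

def attentive_guidance_loss(attention, gold_alignment, eps=1e-9):
    """Auxiliary loss nudging a seq2seq model's attention weights
    towards a gold (target) alignment pattern.

    attention:      (T_out, T_in), rows are softmax distributions
    gold_alignment: (T_out, T_in), rows are one-hot (or soft) targets
    """
    # Cross-entropy between gold alignment and model attention,
    # averaged over output positions.
    return float(-np.mean(np.sum(gold_alignment * np.log(attention + eps), axis=1)))

# On a 3-step copy task the gold alignment is the diagonal: sharp,
# near-diagonal attention is penalised far less than diffuse attention.
gold = np.eye(3)
sharp = np.eye(3) * 0.97 + 0.01    # diag 0.98, off-diag 0.01; rows sum to 1
diffuse = np.full((3, 3), 1 / 3)   # uniform attention
assert attentive_guidance_loss(sharp, gold) < attentive_guidance_loss(diffuse, gold)
```

Added to the ordinary task loss, a term like this rewards models whose attention patterns match the intended input–output alignment.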

While AG is effective, it has the drawback of requiring an extra supervision signal. As a remedy, I will present sequence-to-attention, a new architecture that we specifically designed to exploit attention to find compositional patterns in the input without the need for extra supervision. The solutions found by the model are highly interpretable, allowing easy analysis of both the types of solutions that are found and the potential causes of mistakes.

Lastly, I will present some preliminary results for a second architecture, in which we are trying to disentangle content- and position-based representations in the attention mechanism of a seq2seq model. This again helps with interpretability, but also with extrapolating to sequences longer than those seen by the model during training. Furthermore, it gives us the chance to design a more human-like version of (positional) attention.
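As a hedged illustration of what separating the two terms might look like (not the architecture presented in the talk), the sketch below scores attention as a sum of a content term (query–key dot products) and a position term defined over relative offsets only. Because the position term depends only on offsets, it applies unchanged to sequences longer than any seen in training.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def disentangled_attention(queries, keys, pos_bias):
    """Attention whose scores add a content term (query-key dot
    products) to a position term (a bias indexed by relative offset
    j - i). Offsets absent from pos_bias default to a zero bias.

    queries: (T_out, d); keys: (T_in, d)
    pos_bias: dict mapping relative offset j - i to a scalar bias
    """
    T_out, T_in = queries.shape[0], keys.shape[0]
    content = queries @ keys.T                          # (T_out, T_in)
    position = np.array([[pos_bias.get(j - i, 0.0)
                          for j in range(T_in)] for i in range(T_out)])
    return softmax(content + position)

# With uninformative (zero) content, a bias favouring offset 0 yields
# diagonal attention - even on input/output lengths never seen together.
attn = disentangled_attention(np.zeros((5, 4)), np.zeros((7, 4)), {0: 5.0})
assert all(attn[i].argmax() == i for i in range(5))
```

Factoring the score this way also aids interpretability: one can inspect the content and position terms separately to see which is driving each attention decision.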

In the second part of the talk, I will argue that it is currently difficult to test for compositionality in neural networks. I will then present our compositional manifesto, a new battery of tests to assess the compositional abilities of seq2seq models. In particular, I will introduce five tests: Localism, Substitutivity, Productivity, Systematicity and Overgeneralisation. I will then “test the tests” using three instances of a seq2seq model: a recurrent seq2seq, a convolutional seq2seq, and a Transformer network.
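To make one of these tests concrete, here is a toy substitutivity probe: if two tokens are meant to be synonyms, swapping one for the other should leave the model's output unchanged. The harness and all names are hypothetical illustrations, not the actual test suite from the talk.

```python
def substitutivity_test(model, inputs, synonym_pairs):
    """Substitutivity probe: swapping a token for its designated
    synonym should not change the model's output.
    `model` is any callable from a token list to an output list.
    Returns the (original, swapped) pairs on which the model fails.
    """
    failures = []
    for seq in inputs:
        for a, b in synonym_pairs:
            swapped = [b if tok == a else tok for tok in seq]
            if swapped != list(seq) and model(swapped) != model(seq):
                failures.append((list(seq), swapped))
    return failures

# A toy "model" that maps both synonyms to the same output passes;
# one that copies tokens verbatim fails.
collapsing = lambda seq: ['X' if t in ('small', 'little') else t.upper() for t in seq]
verbatim = lambda seq: [t.upper() for t in seq]
assert substitutivity_test(collapsing, [['a', 'small', 'dog']], [('small', 'little')]) == []
assert len(substitutivity_test(verbatim, [['a', 'small', 'dog']], [('small', 'little')])) == 1
```

The other tests can be framed similarly as behavioural probes, e.g. productivity checks performance on sequences longer than those in training.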

To conclude, I will highlight new research directions relating to compositional learning where I aim to ground the learners in the visual world.

This talk is part of the Language Technology Lab Seminars series.


