
Long Sequence-to-Sequence Summarization: Efficient Transformer Models & Complementary Techniques


If you have a question about this talk, please contact Dr Kate Knill.

This talk will be on Zoom.

Abstract: Transformer-based models have achieved state-of-the-art results on a wide range of tasks, including document summarization. Typically, these systems are trained by fine-tuning a large pre-trained model on the target task. One issue with transformer-based models is that their memory and compute requirements do not scale well as the input length grows, so for long-document summarization it can be challenging to train or fine-tune them. In this talk, we will first cover some recent efficient transformer models for sequence-to-sequence tasks, including the motivation behind their design choices as well as their performance on summarization tasks. Second, the talk will cover alternative techniques that are complementary to efficient architectures. The talk will discuss the CUED systems that were successful in the Spotify Podcast Summarization Challenge (the TREC 2020 Podcast Summarisation Track).
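To make the scaling issue concrete, here is a minimal NumPy sketch (not taken from the talk or the papers; the window size w and the simple windowed variant are illustrative assumptions) contrasting full self-attention, which materialises an n-by-n score matrix and so grows quadratically in memory, with a local-window variant in the spirit of efficient attention models, which grows only linearly:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # Q, K, V: (n, d). The score matrix is (n, n), so memory is O(n^2).
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

def local_attention(Q, K, V, w=64):
    # Each position attends only to w neighbours on each side, so memory
    # is O(n * w) -- the idea behind local/windowed efficient-attention
    # variants (hypothetical simplification, for illustration only).
    n, d = Q.shape
    out = np.empty_like(V)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        s = Q[i] @ K[lo:hi].T / np.sqrt(d)
        out[i] = softmax(s) @ V[lo:hi]
    return out

n, d = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
# Full attention would materialise a 4096 x 4096 score matrix (~128 MB at
# float64); doubling n quadruples that, while the windowed variant grows
# only linearly in n.
out = local_attention(Q, K, V)
print(out.shape)  # (4096, 64)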

Bio: Potsawee Manakul is a second-year PhD student in the Speech Group, Department of Engineering, University of Cambridge, supervised by Prof. Mark Gales. His primary research interests include text summarization, summary assessment, and natural and spoken language processing more broadly. He obtained his B.A. and M.Eng. degrees from the University of Cambridge, where he studied information and computer engineering.

Related papers:

[1] “Long-Span Dependencies in Transformer-based Summarization Systems” (ACL 2021), Link: https://arxiv.org/abs/2105.03801

[2] “CUED_speech at TREC 2020 Podcast Summarisation Track”, Link: https://arxiv.org/abs/2012.02535

This talk is part of the CUED Speech Group Seminars series.
