Spoken Language Understanding, with and without Pre-training

If you have a question about this talk, please contact Marinela Parovic.

Spoken language understanding (SLU) tasks involve mapping from speech audio signals to semantic labels. Given the complexity of such tasks, good performance might be expected to require large labeled datasets, which are difficult to collect for each new task and domain. Recent work on self-supervised speech representations has made it feasible to consider learning SLU models with limited labeled data, but it is not well understood what pre-trained models learn and how best to apply them to downstream tasks. In this talk I will describe recent work that (1) begins to build a better understanding of the information learned by pre-trained speech models and (2) explores a spoken language understanding task, spoken named entity recognition (NER), with limited labeled data. Along the way, we also explore how access to a speech recognizer helps (or doesn’t help) spoken NER, as well as ways of improving low-resource spoken NER other than using pre-trained models.
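
As a rough illustration of the setup the abstract describes, the sketch below pairs a frozen, pre-trained self-supervised speech encoder with a small trainable tagging head for spoken NER, so that only a little labeled data is needed for the downstream task. This is a minimal sketch under stated assumptions: the wav2vec 2.0 checkpoint, the BIO tag set, and the frame-level tagging head are illustrative choices, not the speaker's actual method.

    # Hypothetical sketch: pre-trained self-supervised encoder + small SLU head.
    import torch
    import torch.nn as nn
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    ENCODER = "facebook/wav2vec2-base"  # assumed pre-trained checkpoint
    NER_TAGS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

    class SpokenNERTagger(nn.Module):
        """Frozen pre-trained encoder plus a per-frame tagging head,
        so only the small linear layer needs labeled SLU data."""
        def __init__(self, num_tags: int):
            super().__init__()
            self.encoder = Wav2Vec2Model.from_pretrained(ENCODER)
            self.encoder.requires_grad_(False)  # keep pre-trained weights fixed
            self.head = nn.Linear(self.encoder.config.hidden_size, num_tags)

        def forward(self, input_values: torch.Tensor) -> torch.Tensor:
            # (batch, samples) -> (batch, frames, hidden) -> per-frame tag logits
            hidden = self.encoder(input_values).last_hidden_state
            return self.head(hidden)

    extractor = Wav2Vec2FeatureExtractor.from_pretrained(ENCODER)
    model = SpokenNERTagger(num_tags=len(NER_TAGS)).eval()

    # One second of dummy 16 kHz audio stands in for a real utterance.
    audio = torch.randn(16000).numpy()
    inputs = extractor(audio, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values)  # shape: (1, frames, len(NER_TAGS))
    print(logits.shape)

In practice one would train the head (and possibly unfreeze the upper encoder layers) on whatever labeled spoken-NER data is available; freezing the encoder here simply makes the limited-labeled-data motivation concrete.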

This talk is part of the Language Technology Lab Seminars series.
