Unsupervised Domain Adaptation for Neural Search

If you have a question about this talk, please contact Marinela Parovic.

After textual information retrieval had stagnated for many years, pre-trained transformer networks delivered a big performance boost, yielding substantially better search results. However, so far these approaches require large amounts of training data, which are seldom available for many use cases. In this talk, I will start with an overview of different neural search approaches. I will then present BEIR, a benchmark that tests neural search methods in an out-of-domain setting. As the benchmark reveals, many architectures are sensitive to domain shifts, limiting their usefulness in many real-world applications. To overcome this shortcoming, we created Generative Pseudo Labeling (GPL), a method that transfers knowledge from slow but robust architectures to fast but domain-sensitive approaches, resulting in greatly improved search quality.
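The knowledge-transfer idea behind GPL can be illustrated with a minimal sketch: for each target-domain passage, generate a query, mine a negative passage, and let a slow but robust teacher (a cross-encoder in the real pipeline) assign a score margin that a fast student retriever is then trained to reproduce. Everything below is a toy stand-in under stated assumptions: `generate_query` mimics a learned query generator (e.g. a T5 model in practice), `toy_score` is a lexical-overlap placeholder for a real cross-encoder, and negatives are sampled randomly rather than mined with a dense retriever.

```python
# Hedged sketch of the Generative Pseudo Labeling (GPL) training-data
# pipeline. All model calls are toy stand-ins, not the actual method.
import random

corpus = [
    "neural search uses dense vector representations",
    "transformers improved information retrieval",
    "domain shift hurts dense retrieval models",
]

def generate_query(passage):
    # Assumption: stand-in for a learned query generator.
    return " ".join(passage.split()[:3])

def toy_score(query, passage):
    # Toy lexical-overlap "cross-encoder" teacher score in [0, 1].
    q, p = set(query.split()), set(passage.split())
    return len(q & p) / max(len(q), 1)

def gpl_pseudo_labels(corpus):
    examples = []
    for pos in corpus:
        query = generate_query(pos)
        # Negative mining: here just a random other passage (assumption);
        # GPL uses a dense retriever to find hard negatives.
        neg = random.choice([p for p in corpus if p != pos])
        # Pseudo label: the teacher's score margin between the
        # positive and the negative passage.
        margin = toy_score(query, pos) - toy_score(query, neg)
        examples.append((query, pos, neg, margin))
    return examples

labels = gpl_pseudo_labels(corpus)
for query, pos, neg, margin in labels:
    print(f"{query!r} margin={margin:.2f}")
```

In the full method, the student bi-encoder is trained with a margin-MSE loss to match these teacher margins, which is what transfers the cross-encoder's robustness to the faster dense model.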

This talk is part of the Language Technology Lab Seminars series.

