
Cross-lingual Learning 2.0


If you have a question about this talk, please contact Mohammad Taher Pilehvar.

I will survey the last decade of work on cross-lingual transfer of NLP models, briefly presenting not only the methods but also the assumptions that have been made about the resources available to us when developing models for low-resource languages. I will discuss three trends: a) Cross-lingual learning is being applied to increasingly complex tasks, including frame semantics, discourse parsing, machine translation, and question answering. b) We are making fewer and fewer assumptions about the resources available. c) We have started exploring synergies between multiple transfer problems, potentially leading to “universal” models. Finally, I will discuss work in progress in our group on guiding or regularizing multi-task cross-lingual learning using an auxiliary loss computed from readily available data.
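
To make the last sentence concrete, here is a minimal sketch, assuming a PyTorch-style shared encoder with a single main task head (rather than a full multi-task setup) and next-token prediction on unlabelled text as the auxiliary loss. The model, names, and weighting are illustrative assumptions, not the method presented in the talk.

import torch
import torch.nn as nn

# Illustrative sketch only: a shared encoder with a main task head and an
# auxiliary language-modelling head trained on raw (unlabelled) text.
class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, hidden=256, num_labels=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.task_head = nn.Linear(hidden, num_labels)  # main cross-lingual task
        self.aux_head = nn.Linear(hidden, vocab_size)   # auxiliary LM objective

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.task_head(states[:, -1]), self.aux_head(states)

def combined_loss(model, task_batch, raw_batch, aux_weight=0.1):
    # Main task loss plus a down-weighted auxiliary loss; the auxiliary term
    # acts as a regularizer computed from readily available raw text.
    ce = nn.CrossEntropyLoss()
    task_logits, _ = model(task_batch["tokens"])
    task_loss = ce(task_logits, task_batch["labels"])
    _, lm_logits = model(raw_batch["tokens"][:, :-1])      # predict next token
    lm_loss = ce(lm_logits.reshape(-1, lm_logits.size(-1)),
                 raw_batch["tokens"][:, 1:].reshape(-1))
    return task_loss + aux_weight * lm_loss

The point of the sketch is simply that the auxiliary term requires only raw text, so it can be computed for low-resource languages where labelled data is scarce.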

This talk is part of the Language Technology Lab Seminars series.
