
Improving & Better Understanding Word Vector Representations


If you have a question about this talk, please contact Tamara Polajnar.

Data-driven learning of distributional word vector representations is a technique of central importance in natural language processing. In this talk, we will explore several questions, and their solutions, aimed at improving and better understanding distributional word vectors. Can word vectors benefit from information stored in semantic lexicons? Can these word vectors be made to resemble the features typically used in NLP? Do the vector dimensions have meanings associated with them, or are they uninterpretable? Is it necessary to derive word vectors from distributional context at all?
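As an illustration of the first question above (a minimal sketch only, not a description of the methods presented in the talk), one common way to let word vectors benefit from a semantic lexicon is a retrofitting-style update that pulls each vector toward its lexicon neighbours. The lexicon, toy vectors, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, iterations=10):
    """Nudge distributional word vectors toward their lexicon neighbours.

    vectors  : dict mapping word -> 1-D numpy array (original embeddings)
    lexicon  : dict mapping word -> list of related words (e.g. synonyms)
    alpha    : weight keeping each vector close to its original value
    beta     : weight pulling a vector toward each lexicon neighbour
    Returns a new dict of adjusted vectors; words absent from the lexicon
    keep their original vectors.
    """
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if word not in new_vecs or not nbrs:
                continue
            # Weighted average of the original vector and the neighbours'
            # current vectors (a simple retrofitting-style update).
            num = alpha * vectors[word] + beta * sum(new_vecs[n] for n in nbrs)
            new_vecs[word] = num / (alpha + beta * len(nbrs))
    return new_vecs

# Toy usage with made-up 3-dimensional vectors and a tiny synonym list.
vecs = {
    "happy": np.array([0.9, 0.1, 0.0]),
    "glad":  np.array([0.2, 0.8, 0.1]),
    "sad":   np.array([-0.7, 0.2, 0.3]),
}
lexicon = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(vecs, lexicon)["happy"])
```

After a few iterations, the vectors for "happy" and "glad" move closer together while "sad", which has no lexicon links here, is left unchanged.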

This talk is part of the NLIP Seminar Series.
