Relational knowledge in vector spaces

If you have a question about this talk, please contact Edoardo Maria Ponti.

In this talk, a number of unsupervised approaches for learning vectors that capture relational information will be described. The main motivation is that the amount of information that can be encoded in a word embedding is limited, and constrained by the similarity structure imposed by the typical methods based on co-occurrence statistics. For example, the relations holding between lion and zebra, movie theater and popcorn, or dog and porch are all intuitive to us, but it is reasonable to assume that an explicit encoding capturing the subtle nature of these relations would be more appropriate than “simply” manipulating their word vectors. While such encodings may be acquired from external resources (e.g., knowledge bases like ConceptNet or lexical taxonomies like WordNet), these would be inherently limited, among other things, by their symbolic nature. Finally, in addition to methods for learning relational knowledge, experimental results will be discussed, showing their benefit in lexical semantics tasks, text classification, and the modeling of collocations.
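As a point of reference for the “simply manipulating their word vectors” baseline the abstract alludes to, a common approach is to represent the relation between two words as the offset (difference) of their embeddings, and compare relations by cosine similarity. The sketch below uses tiny hypothetical embedding values purely for illustration; real embeddings would come from a trained model such as word2vec or GloVe, and the talk's point is precisely that such offsets may be too coarse to capture subtle relations.

```python
import numpy as np

# Toy 4-dimensional word embeddings (hypothetical values for illustration only;
# in practice these would be learned from co-occurrence statistics).
emb = {
    "lion":  np.array([0.9, 0.10, 0.30, 0.0]),
    "zebra": np.array([0.8, 0.20, 0.10, 0.1]),
    "cat":   np.array([0.7, 0.15, 0.40, 0.0]),
    "mouse": np.array([0.6, 0.20, 0.25, 0.1]),
}

def relation_vector(a, b):
    """Vector-offset baseline: encode the relation between a and b
    as the difference of their word embeddings."""
    return emb[a] - emb[b]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Under the offset hypothesis, two pairs instantiating the same relation
# (here, predator-prey) should yield similar relation vectors.
r_lion_zebra = relation_vector("lion", "zebra")
r_cat_mouse = relation_vector("cat", "mouse")
sim = cosine(r_lion_zebra, r_cat_mouse)
print(f"similarity of relation vectors: {sim:.3f}")
```

With these toy values the two offsets come out highly similar, but the baseline cannot distinguish *which* relation holds (predator-prey vs. co-occurrence vs. part-whole), which is one motivation for learning dedicated relation vectors instead.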

This talk is part of the Language Technology Lab Seminars series.

