Modelling implicit language learning with distributional semantics

If you have a question about this talk, please contact Tamara Polajnar.

In distributional semantics, words acquire their meaning from the statistical information inherent in their linguistic environment. A common criticism of such representations is that, unlike more traditional models of semantic memory, they do not explicitly encode semantic features, which calls into question the cognitive relevance of such statistical mechanisms during language learning and processing. Here we show that distributional semantic models provide a good fit to data obtained from implicit language learning experiments with adults. In these experiments, participants are introduced to novel non-words that co-occur with already known words, conditioned on underlying semantic regularities such as concrete/abstract or animate/inanimate. Participants can implicitly learn such underlying semantic regularities, although whether they do so depends on the nature of the conceptual distinction involved and on their first language. Using datasets from four behavioural experiments that employed different semantic manipulations, we obtained generalisation gradients that closely matched those of humans, capturing the effects of the various conceptual distinctions and of cross-linguistic differences.
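As a rough illustration of the kind of mechanism the abstract describes (not the speaker's actual model), the sketch below builds simple count-based co-occurrence vectors from a toy corpus and measures how similar a novel non-word is to known concrete versus abstract words. The corpus, the non-word "gorp", the word lists, and all parameter choices here are invented for illustration only.

```python
# Minimal sketch of count-based distributional semantics, assuming a toy corpus.
from collections import Counter
from math import sqrt

corpus = [
    "the dog chased the ball across the garden",
    "she held the stone in her hand",
    "the idea of freedom shaped the debate",
    "truth and justice are abstract notions",
    "the gorp rolled across the garden",   # novel non-word in concrete-like contexts
    "she held the gorp in her hand",
]

def cooccurrence_vectors(sentences, window=2):
    """Count how often each word appears near every other word within a window."""
    vectors = {}
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(context)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

vectors = cooccurrence_vectors(corpus)
concrete, abstract = ["dog", "stone", "ball"], ["idea", "freedom", "truth"]

# A crude "generalisation gradient": mean similarity of the novel word to each class.
for label, words in [("concrete", concrete), ("abstract", abstract)]:
    sims = [cosine(vectors["gorp"], vectors[w]) for w in words if w in vectors]
    print(label, round(sum(sims) / len(sims), 3))
```

On this toy corpus the non-word ends up closer to the concrete words because it shares their contexts, which is the intuition behind using distributional similarity to model how participants generalise an implicitly learned semantic regularity to new items.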

This talk is part of the NLIP Seminar Series.
