Bayesian learning of visual chunks by human observers

If you have a question about this talk, please contact Philip Sterne.

Efficient and versatile processing of hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. Based on Bayesian model comparison, we developed an ideal learner that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning, but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pairwise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations, not by encoding the full correlational structure of the input.
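The Bayesian model comparison at the heart of this approach can be illustrated with a toy sketch. The snippet below is not the authors' actual model: it assumes a made-up dataset of scenes containing two shapes (A and B) and compares just two hypothetical encodings via their marginal likelihoods under uniform Beta priors — one treating the shapes as independent Bernoulli features, one treating them as a single chunk. When the shapes always co-occur, the chunk model needs fewer parameters to explain the same data and so attains higher evidence, which is the sense in which the ideal learner prefers "minimally sufficient" chunks.

```python
from math import lgamma

def beta_bernoulli_log_evidence(k, n):
    # log marginal likelihood of k successes in n Bernoulli trials
    # under a uniform Beta(1,1) prior on the success probability:
    # log ∫ θ^k (1-θ)^(n-k) dθ = log B(k+1, n-k+1)
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

# Hypothetical scenes: presence (1) / absence (0) of shapes A and B.
# Here A and B are perfectly correlated.
scenes = [(1, 1), (1, 1), (0, 0), (1, 1), (0, 0), (1, 1)]
n = len(scenes)

# Model 1: A and B are independent features, one Bernoulli each.
log_ev_indep = sum(
    beta_bernoulli_log_evidence(sum(s[i] for s in scenes), n)
    for i in range(2)
)

# Model 2: A and B form one chunk, modelled by a single Bernoulli;
# a scene showing only one of the two has zero likelihood.
if all(a == b for a, b in scenes):
    k = sum(a for a, _ in scenes)
    log_ev_chunk = beta_bernoulli_log_evidence(k, n)
else:
    log_ev_chunk = float("-inf")

print("log evidence, independent features:", log_ev_indep)
print("log evidence, AB chunk:            ", log_ev_chunk)
# For these perfectly correlated scenes the chunk model wins.
```

The same comparison run on scenes where A and B vary independently would drive the chunk model's evidence to −∞, so the learner only posits a chunk when the data genuinely support it.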

This talk is part of the Inference Group series.


 
