
A normative account of episodic memory in online learning over open model spaces


If you have a question about this talk, please contact Guillaume Hennequin.

Both the human brain and artificial learning agents operating in real-world or comparably complex environments face the problem of online model selection, because the amount and dimensionality of the data, and the dimensionality of the model space, are huge or even infinite. In principle this can be handled: hierarchical Bayesian inference provides a principled method for model selection and converges on the same posterior under both batch and online learning. However, maintaining a parameter posterior for each model in parallel generally incurs an even higher memory cost than storing the entire data set, and is therefore clearly infeasible. On the other hand, the sufficient statistics for one model are usually not sufficient for fitting a different kind of model, so the agent loses information with each model change. We propose that episodic memory can circumvent this challenge of memory-limited online model selection by retaining a selected subset of data points. We design a method to compute the quantities necessary for model selection even when the data have been discarded and only the statistics of one (or a few) learnt models are available. We demonstrate on a simple model that a limited-size episodic memory buffer, whose content is optimised to retain data with statistics not matching the current representation, can resolve the fundamental challenges of online model selection.
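The buffer idea in the abstract can be illustrated with a minimal sketch. This is not the speakers' method: the model forms (a single Gaussian fit online via sufficient statistics, versus a hypothetical two-component mixture), the retention rule (keep the points least well explained by the current model), and all parameters are illustrative assumptions.

```python
import math
import random

def gauss_loglik(x, mu, var):
    # log-density of N(x | mu, var)
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

class EpisodicBuffer:
    """Fixed-capacity buffer retaining the data points least well
    explained (lowest log-likelihood) by the current model."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of (loglik_at_arrival, x)

    def offer(self, x, loglik):
        self.items.append((loglik, x))
        # keep only the `capacity` worst-explained points
        self.items.sort(key=lambda t: t[0])
        self.items = self.items[: self.capacity]

    def data(self):
        return [x for _, x in self.items]

random.seed(0)
# Stream: a unimodal phase followed by a bimodal phase the
# current model cannot capture.
stream = [random.gauss(0.0, 1.0) for _ in range(200)]
stream += [random.gauss(5.0 if random.random() < 0.5 else -5.0, 1.0)
           for _ in range(200)]

# Online sufficient statistics of the current single-Gaussian model;
# raw data are discarded after updating (n, s, s2).
n, s, s2 = 0, 0.0, 0.0
buf = EpisodicBuffer(capacity=20)
for x in stream:
    n += 1
    s += x
    s2 += x * x
    mu = s / n
    var = max(s2 / n - mu * mu, 1e-6)
    buf.offer(x, gauss_loglik(x, mu, var))

# Model comparison uses only the retained subset: the current single
# Gaussian versus an alternative two-component mixture (fixed means
# +/-5 here purely for illustration).
subset = buf.data()
mu = s / n
var = max(s2 / n - mu * mu, 1e-6)
ll_single = sum(gauss_loglik(x, mu, var) for x in subset)
ll_mix = sum(math.log(0.5 * math.exp(gauss_loglik(x, 5.0, 1.0))
                      + 0.5 * math.exp(gauss_loglik(x, -5.0, 1.0)))
             for x in subset)
print(ll_single, ll_mix)
```

Because the buffer preferentially keeps exactly the points the current model fails to explain, the small retained subset carries the evidence needed to favour the alternative model, even though the bulk of the stream was discarded.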

This talk is part of the Computational Neuroscience series.



© 2006-2017 Talks.cam, University of Cambridge.