
Computational Neuroscience Journal Club


  • Yul Kang and Jonathan So
  • Tuesday 06 April 2021, 15:00-16:30
  • Online on Zoom

If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly journal club, held online via Zoom, where two presenters jointly present a topic. The next topic is ‘distributed distributional codes’, presented by Yul Kang and Jonathan So.

Zoom information: https://us02web.zoom.us/j/84958321096?pwd=dFpsYnpJYWVNeHlJbEFKbW1OTzFiQT09

It is clear from behavioural studies in a variety of settings that humans are able not only to take uncertainty into account, but to do so in a near Bayes-optimal fashion. What is less clear is how the brain represents uncertainty, or how it performs computations with such representations. One candidate theory is that the brain uses Distributed Distributional Codes (DDCs) to represent probability distributions over quantities of interest. The DDC shares similarities with other schemes that encode distributions in a population of neurons; however, it has some particularly appealing properties with regard to performing computations for downstream tasks.
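To give a concrete flavour of the scheme: in a DDC, a distribution p(x) is represented by the mean activations r_i = E_p[φ_i(x)] of a population of neurons with fixed nonlinear encoding functions φ_i, and downstream expectations E_p[f(x)] can then be read out linearly from r. The sketch below (an illustrative toy with arbitrary random tanh encoders, not code from the papers) shows both steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random nonlinear encoding functions phi_i(x) = tanh(w_i * x + b_i)
n_neurons = 100
w = rng.normal(size=n_neurons)
b = rng.normal(size=n_neurons)

def ddc_encode(samples):
    """DDC representation of a distribution: r_i = E_p[phi_i(x)],
    approximated here by averaging over samples from p."""
    return np.tanh(np.outer(samples, w) + b).mean(axis=0)

# Two example distributions over x
p1 = rng.normal(0.0, 1.0, size=10_000)   # N(0, 1)
p2 = rng.normal(2.0, 0.5, size=10_000)   # N(2, 0.25)

r1, r2 = ddc_encode(p1), ddc_encode(p2)

# A downstream expectation E_p[f(x)] is a linear readout of r:
# fit alpha with f(x) ~ sum_i alpha_i phi_i(x), then E_p[f] ~ alpha @ r.
xs = np.linspace(-5, 5, 500)
Phi = np.tanh(np.outer(xs, w) + b)                 # basis on a grid
alpha, *_ = np.linalg.lstsq(Phi, xs, rcond=None)   # approximate f(x) = x

print(alpha @ r1)  # close to 0.0, the mean of p1
print(alpha @ r2)  # close to 2.0, the mean of p2
```

The appealing property hinted at in the abstract is visible here: once the encoding functions are fixed, many different expectations can be decoded from the same population vector by swapping the linear readout alpha.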

In this journal club we will begin with an overview of DDC representations, then look at two specific applications of the DDC: its application to inference and learning in hierarchical latent variable models, and its use in learning successor representations that allow efficient and flexible reinforcement learning and planning in noisy, partially observable environments.
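For readers unfamiliar with the second application: the successor representation (SR) caches expected discounted future state occupancies, M = Σ_t γ^t P^t = (I − γP)^(−1), so that values factorise as V = M r and adapt instantly when rewards change. A minimal fully-observable sketch (the tabular SR only, not the DDC-based partially observable version from the paper) is:

```python
import numpy as np

gamma = 0.9
# Deterministic 3-state cycle: 0 -> 1 -> 2 -> 0
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# Successor representation: expected discounted future state occupancies
M = np.linalg.inv(np.eye(3) - gamma * P)

# Values factorise as V = M r, so a new reward vector is re-planned
# with a single matrix-vector product, with no re-learning of dynamics.
r = np.array([0.0, 0.0, 1.0])  # reward only in state 2
V = M @ r
print(V)  # states closer to the reward have higher value
```

In the paper, the same idea is extended to partially observable settings by learning successor features over DDC posterior representations rather than over directly observed states.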

1. Vertes, E. & Sahani, M. (2018) Flexible and accurate inference and learning for deep generative models. Advances in Neural Information Processing Systems, 4166-4175. https://proceedings.neurips.cc/paper/2018/file/955cb567b6e38f4c6b3f28cc857fc38c-Paper.pdf

2. Vertes, E. & Sahani, M. (2019) A neurally plausible model learns successor representations in partially observable environments. Advances in Neural Information Processing Systems, 13714-13724. http://papers.neurips.cc/paper/9522-a-neurally-plausible-model-learns-successor-representations-in-partially-observable-environments.pdf

This talk is part of the Computational Neuroscience series.


© 2006-2021 Talks.cam, University of Cambridge.