BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Computational Neuroscience Journal Club - Yul Kang and Jonathan So
DTSTART:20210406T140000Z
DTEND:20210406T153000Z
UID:TALK158728@talks.cam.ac.uk
CONTACT:Jake Stroud
DESCRIPTION:Please join us for our fortnightly journal club online via Zoo
 m where two presenters will jointly present a topic. The next top
 ic is ‘distributed distributional codes’ presented by Yul Kang and Jon
 athan So.\n\nZoom information: https://us02web.zoom.us/j/84958321096?pwd=d
 FpsYnpJYWVNeHlJbEFKbW1OTzFiQT09\n\nIt is clear from behavioural studies in
 a variety of settings that humans not only take uncertainty into account\,
  but do so in a near Bayes-optimal fashion. What is less c
 lear is how the brain represents uncertainty\, or how it performs computat
 ions with such representations. One competing theory is that the brain use
 s Distributed Distributional Codes (DDC) to represent probability distribu
 tions over quantities of interest. The DDC shares similarities with other 
 schemes that encode distributions in a population of neurons\; however\, t
 he DDC has some particularly appealing properties with regard to performin
 g computations for downstream tasks.\n\nIn this journal club we will begin
  with an overview of DDC representations\, and proceed to look at two spec
 ific applications of the DDC\; its application to inference and learnin
 g in hierarchical latent variable models\, and how it can be used to learn
  successor representations to allow efficient and flexible reinforcement l
 earning and planning in noisy\, partially observable environments.\n\n1. V
 ertes\, E. & Sahani\, M. (2018) Flexible and accurate inference and learni
 ng for deep generative models. Advances in Neural Information Processing S
 ystems\, 4166-4175. https://proceedings.neurips.cc/paper/2018/file/955cb56
 7b6e38f4c6b3f28cc857fc38c-Paper.pdf \n\n2. Vertes\, E. & Sahani\, M. (2019
 ) A neurally plausible model learns successor representations in partially
  observable environments. Advances in Neural Information Processing System
 s\, 13714-13724. http://papers.neurips.cc/paper/9522-a-neurally-plausible-
 model-learns-successor-representations-in-partially-observable-environment
 s.pdf \n
LOCATION:Online on Zoom
END:VEVENT
END:VCALENDAR
