BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Computational Neuroscience Journal Club - Yul Kang and Wayne Soo
DTSTART:20201020T140000Z
DTEND:20201020T153000Z
UID:TALK153244@talks.cam.ac.uk
CONTACT:Jake Stroud
DESCRIPTION:Please join us for our fortnightly journal club online via Zoo
 m\, where two presenters will jointly present a topic.\n\nZoom info
 :\nhttps://us02web.zoom.us/j/81395647267?pwd=YW9Ub1YzTUpBbndzZXl4c0loU2pqU
 T09\nMeeting ID: 813 9564 7267\nPasscode: 839088\n\nThe next topic is 'rec
 urrent neural network (RNN) models of spatial navigation'.\n\nRNNs are sui
 ted for the study of circuits involved in spatial navigation because (1) u
 nlike feedforward networks\, they are capable of maintaining an internal s
 tate (e.g.\, current location of the agent/animal in the arena) and updati
 ng it\, which is necessary for navigation\, and (2) the brain regions invo
 lved in spatial navigation (hippocampal-entorhinal system) are known to ha
 ve recurrent connectivity that is important for maintaining their spatial 
 representation.\n\nIn Part 1\, we will look at some early and straightforw
 ard approaches that directly use RNN models. Kanitscheider et al. trained 
 their network to perform simultaneous localization and mapping. Banino et
  al. used an RNN to perform path integration\, and investigated the effic
 iency of its resultant grid-like representations. Cueva et al. tackled a
  similar
  path integration task with their own RNN model\, which gave rise to vari
 ous spatially selective units\, such as grid and band cells.\n\nIn Part 2
 \, we will cover recent proposals that push the boundaries of the field b
 y studyin
 g unsupervised training or incorporating more biological structure into th
 e model. Recanatesi et al. trained their network without supervision using
  predictive learning\, and offered an explanation of why predictive learn
 ing gives rise to low-dimensional representations of latent variables. Ev
 ans et al. incorporated known hippocampal-entorhinal structure into their
  model and explained the observed pattern of hippocampal-entorhinal activ
 ity that deviates from what would be expected from a simple rule of physic
 al space.\n\nPapers:\n\nKanitscheider\, I.\, Fiete\, I. (2017). Training
  recurrent networks to generate hypotheses about how the brain solves ha
 rd navigation problems. NeurIPS. http://papers.nips.cc/paper/7039-traini
 ng-recurrent-networks-to-generate-hypotheses-about-how-the-brain-solves-
 hard-navigation-problems\n\nBanino\, A.\, Barry\, C.\, Uria\, B.\, Blund
 ell\, C.\, Lillicrap\, T.\, Mirowski\, P.\, Pritzel\, A.\, Chadwick\, M.
 \, Degris\, T.\, Modayil\, J.\, Wayne\, G.\, Soyer\, H.\, Viola\, F.\,
  Zhang\, B.\, Goroshin\, R.\, Rabinowitz\, N.\, Pascanu\, R.\, Beattie\,
  C.\, Petersen\, S.\, Sadik\, A.\, Gaffney\, S.\, King\, H.\, Kavukcuogl
 u\, K.\, Hassabis\, D.\, Hadsell\, R.\, Kumaran\, D. (2018). Vector-base
 d navigation using grid-like representations in artificial agents. Natur
 e. https://dx.doi.org/10.1038/s41586-018-0102-6\n\nCueva\, C.\, Wang\,
  P.\, Chin\, M.\, Wei\, X. (2020). Emergence of functional and structura
 l properties of the head direction system by optimization of recurrent n
 eural networks. ICLR Spotlight. https://openreview.net/forum?id=HklSeREt
 PB\n\nRecanatesi\, S.\, Farrell\, M.\, Lajoie\, G.\, Deneve\, S.\, Rigo
 tti\, M.\, Shea-Brown\, E. (2019). Predictive learning extracts latent s
 pace representations from sensory observations. bioRxiv. https://dx.doi.
 org/10.1101/471987\n\nEvans\, T.\, Burgess\, N. (2020). Replay as struct
 ural inference in the hippocampal-entorhinal system. bioRxiv. https://dx
 .doi.org/10.1101/2020.08.07.241547
LOCATION:Online on Zoom
END:VEVENT
END:VCALENDAR
