
Computational Neuroscience Journal Club

  • Speakers: Yashar Ahmadian and Wayne Soo
  • Date and time: Tuesday 15 June 2021, 15:00-16:30
  • Venue: Online on Zoom

If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly journal club, held online via Zoom, where two presenters jointly present a topic. The next topic is ‘Low-rank RNNs’, presented by Yashar Ahmadian and Wayne Soo.

Zoom information: https://us02web.zoom.us/j/84958321096?pwd=dFpsYnpJYWVNeHlJbEFKbW1OTzFiQT09
Meeting ID: 841 9788 6178
Passcode: 659046

Biological neural networks have connectivity characterised by ordered structure alongside disorder or heterogeneity that cannot be accounted for by known neural features. The structured part of connectivity often takes the form of a low-rank matrix, while the heterogeneity is often modelled as a random matrix with independent elements. On the other hand, tasks of low complexity can be implemented by recurrent neural networks (RNNs) that exhibit low-dimensional dynamics, and influential paradigms for training RNNs on such tasks (e.g. FORCE learning and reservoir computing) by construction yield connectivity that is the sum of a random matrix and a low-rank one. However, the computational benefits of the random component of connectivity (which by itself can lead to chaotic dynamics) for learning or task performance are not clear.
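
As a concrete illustration of this connectivity structure, here is a minimal numerical sketch (Python/NumPy; our own illustration, not code from the papers below) that builds a rank-one-plus-random connectivity matrix of the kind analysed in the first paper and simulates the standard rate dynamics. All parameter values are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of units (illustrative)
g = 1.5   # strength of the random component; g > 1 alone gives chaotic dynamics

# Structured part: a rank-one matrix m n^T / N built from two random vectors.
m = rng.normal(0.0, 1.0, N)
n = rng.normal(0.0, 1.0, N)
P = np.outer(m, n) / N

# Heterogeneous part: i.i.d. Gaussian entries with variance g^2 / N.
chi = rng.normal(0.0, g / np.sqrt(N), (N, N))

J = P + chi  # full connectivity: low-rank plus random

# Standard rate dynamics dx/dt = -x + J tanh(x), integrated with Euler steps.
dt, T = 0.1, 500
x = rng.normal(0.0, 1.0, N)
for _ in range(T):
    x = x + dt * (-x + J @ np.tanh(x))
kappa = m @ np.tanh(x) / N  # overlap of activity with the structured direction
print(f"final overlap kappa = {kappa:.3f}")

In the mean-field picture of the first paper, the scalar overlap kappa acts as an order parameter: the low-rank part steers the dynamics along a low-dimensional subspace, while the random part contributes high-dimensional fluctuations.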

The first two papers that we will present link the low-rank component of connectivity to low-dimensional dynamics, and use dynamical mean-field theory to systematically map the phase diagram of networks with such connectivity when the two components are statistically independent and correlated, respectively. The third paper studies gradient-based training of unrestricted RNNs with random initial connectivity on common neuroscience tasks, and shows that the resulting change in connectivity is low-rank (as illustrated in the sketch below). Moreover, the authors find a clear benefit of the random initial connectivity in speeding up training, and they provide theoretical insight into this finding by analytically studying learning in linear RNNs.
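
The low-rank connectivity change reported in the third paper can be reproduced in a toy setting. The sketch below (again an illustrative Python/NumPy example, not the paper's code, tasks, or hyperparameters) trains a small RNN by backpropagation through time to hold a fixed output, then inspects the singular values of the total weight change Delta W = W - W0.

import numpy as np

rng = np.random.default_rng(1)
N, T, lr, steps = 200, 20, 0.02, 500
g = 0.8
W0 = rng.normal(0.0, g / np.sqrt(N), (N, N))  # random initial connectivity
W = W0.copy()
w_in = rng.normal(0.0, 1.0, N)                # constant input direction
w_out = rng.normal(0.0, 1.0, N) / np.sqrt(N)  # linear readout
target = 1.0

for _ in range(steps):
    # Forward pass: x_{t+1} = tanh(W x_t + w_in), starting from rest.
    xs = [np.zeros(N)]
    for t in range(T):
        xs.append(np.tanh(W @ xs[-1] + w_in))
    err = w_out @ xs[-1] - target

    # Backward pass (BPTT) for the loss L = 0.5 * err^2.
    grad_W = np.zeros_like(W)
    delta = err * w_out * (1.0 - xs[-1] ** 2)
    for t in range(T - 1, -1, -1):
        grad_W += np.outer(delta, xs[t])  # each term is a rank-one outer product
        delta = (W.T @ delta) * (1.0 - xs[t] ** 2)
    W -= lr * grad_W

# Spectrum of the total change: typically a few singular values dominate.
s = np.linalg.svd(W - W0, compute_uv=False)
print("top singular values of Delta W:", np.round(s[:5], 4))
print("fraction of squared norm in top 2 modes:",
      round(float((s[:2] ** 2).sum() / (s ** 2).sum()), 3))

Because every BPTT update is a sum of outer products driven by a single scalar error, the accumulated change concentrates in a few directions; this is the intuition the paper makes precise for linear RNNs.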

1) Mastrogiuseppe, F. and Ostojic, S., Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks, Neuron 99, 609–623 (2018). https://www.sciencedirect.com/science/article/pii/S0896627318305439

2) Schuessler, F. et al., Dynamics of random recurrent networks with correlated low-rank structure, Physical Review Research 2, 013111 (2020). https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.013111

3) Schuessler, F. et al., The interplay between randomness and structure during learning in RNNs, NeurIPS 2020. https://arxiv.org/abs/2006.11036

This talk is part of the Computational Neuroscience series.


