Structure in the randomness of trained recurrent neural networks

If you have a question about this talk, please contact Yul Kang.

Recurrent neural networks are an important class of models for explaining neural computations. Recently, there has been progress both in training these networks to perform various tasks and in relating their activity to activity recorded in the brain. In particular, these models seem to capture the complexity of realistic neural responses. Despite this progress, many fundamental gaps remain on the way to a theory of these networks. What does it mean to understand a trained network? What kinds of regularities should we search for? How does the network reflect the task and its environment? I will present several examples of such regularities, in both the structure and the dynamics that arise through training.
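To make the setting concrete, below is a minimal, hypothetical sketch (in PyTorch; not code from the talk) of the kind of model the abstract refers to: a vanilla RNN trained on a toy memory task, whose trained recurrent connectivity can then be probed for regularities such as low-rank structure. The task, architecture sizes, and analysis choice are all illustrative assumptions.

```python
# Hypothetical sketch: train a small vanilla RNN on a toy sign-memory task,
# then inspect the trained recurrent weights for structure via SVD.
import torch
import torch.nn as nn

torch.manual_seed(0)
N, T, BATCH = 64, 50, 32  # hidden units, timesteps, batch size (illustrative)

class VanillaRNN(nn.Module):
    def __init__(self, n_hidden):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)             # h: (batch, T, n_hidden)
        return self.readout(h[:, -1])  # report from the final hidden state

def make_batch():
    # Toy task: a pulse of +1 or -1 at the first timestep; the network must
    # remember its sign across T steps and report it at the end.
    sign = torch.randint(0, 2, (BATCH, 1)).float() * 2 - 1
    x = torch.zeros(BATCH, T, 1)
    x[:, 0, 0] = sign.squeeze()
    return x, sign

model = VanillaRNN(N)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x, y = make_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Inspect the singular values of the trained recurrent matrix: task-trained
# networks often develop a few dominant directions on top of the random bulk.
W = model.rnn.weight_hh_l0.detach()
print(torch.linalg.svdvals(W)[:5])
```

In sketches like this, one regularity commonly looked for is a handful of large singular values rising above the random bulk of the initial connectivity, echoing the theme of structure arising in otherwise random networks through training.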

This talk is part of the Computational Neuroscience series.
