
Computational Neuroscience Journal Club


If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly journal club, held online via Zoom, where two presenters will jointly present a topic.

Zoom information: https://us02web.zoom.us/j/81138977348?pwd=d0RlQ0QwTHJydzdyR2ttZW93MU5Sdz09

Meeting ID: 811 3897 7348

Passcode: 095299

The next topic is ‘An overview of linear Gaussian models and dimensionality reduction techniques’:

Factor analysis, principal components analysis, mixtures of Gaussian clusters, Kalman filter models, hidden Markov models, slow feature analysis, linear discriminant analysis, canonical correlation analysis, undercomplete independent component analysis, and linear regression are all linear models and methods used throughout neuroscience to make sense of high-dimensional data. Phew, what a long list of methods – it can be difficult to keep track of the differences and similarities between them, since they were developed across fields and over time. We present several papers that attempt to unify these methods in a common framework.

We will first discuss Roweis and Ghahramani’s “A unifying review of linear Gaussian models”, which unifies many of these methods as unsupervised learning under basic generative models. We then present Turner and Sahani’s “A maximum-likelihood interpretation for slow feature analysis”, which establishes a probabilistic interpretation of the slow feature analysis (SFA) algorithm for time series and subsequently develops novel extensions of SFA. Finally, we present Cunningham and Ghahramani’s “Linear Dimensionality Reduction: Survey, Insights, and Generalizations”, which treats these methods as optimization programs over matrix manifolds and analyzes the suboptimality of certain eigenvector-based approaches.

These frameworks help connect the multitude of linear models and dimensionality reduction techniques, can suggest new developments and approaches, and provide a principled way to choose and distinguish between the different options.
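To give a flavour of the unifying view, here is a minimal sketch (not taken from the papers above, but illustrating the kind of model they discuss): probabilistic PCA, one of the simplest linear Gaussian generative models, x = Wz + mean + isotropic noise. The closed-form maximum-likelihood solution of Tipping and Bishop (1999) is computed from the eigendecomposition of the sample covariance; all names and the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2 latent dimensions embedded in 5 observed dimensions.
n, d, q = 500, 5, 2
W_true = rng.normal(size=(d, q))
Z = rng.normal(size=(n, q))
X = Z @ W_true.T + 0.1 * rng.normal(size=(n, d))

# Eigendecomposition of the sample covariance.
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n
evals, evecs = np.linalg.eigh(S)            # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]  # make descending

# ML estimates: sigma^2 is the mean of the discarded eigenvalues,
# and W spans the leading principal subspace, scaled accordingly.
sigma2 = evals[q:].mean()
W_ml = evecs[:, :q] @ np.diag(np.sqrt(evals[:q] - sigma2))
```

Fitting this model by maximum likelihood recovers the same subspace as classical PCA, which is exactly the sense in which PCA falls out of a linear Gaussian generative model; factor analysis, Kalman filters, and (with discrete states) HMMs arise from the same template under different noise and dynamics assumptions.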

Papers: a. Roweis, S., and Ghahramani, Z. “A unifying review of linear Gaussian models.” Neural Computation 11(2) (1999): 305–345. https://cs.nyu.edu/roweis/papers/NC110201.pdf

b. Turner, R., and Sahani, M. “A maximum-likelihood interpretation for slow feature analysis.” Neural Computation 19(4) (2007): 1022–1038. doi:10.1162/neco.2007.19.4.1022. http://www.gatsby.ucl.ac.uk/turner/Publications/turner-and-sahani-2007a.pdf

c. Cunningham, J., and Ghahramani, Z. “Linear Dimensionality Reduction: Survey, Insights, and Generalizations.” Journal of Machine Learning Research 16 (2015): 2859–2900. https://jmlr.org/papers/volume16/cunningham15a/cunningham15a.pdf

This talk is part of the Computational Neuroscience series.

