
Regularized linear autoencoders, the Morse theory of loss, and backprop in the brain


  • Speaker: Jon Bloom (Broad Institute of MIT and Harvard)
  • Time: Monday 24 June 2019, 14:00-15:00
  • Venue: MR12

If you have a question about this talk, please contact Dr Sergio Bacallado.

When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. We prove that L2-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. Finally, we consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of deep learning.
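The main claim of the abstract can be checked numerically. Below is a minimal NumPy sketch (not the authors' code; the dimensions, regularization strength `lam`, and learning rate are illustrative choices): train a linear autoencoder with uniform L2 regularization by gradient descent, then compare the left singular vectors of the decoder with the top principal directions of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 3
# Data whose first three coordinates dominate (stds 5, 4, 3 vs. 1),
# so the top principal directions are approximately the first three axes.
X = rng.normal(size=(n, d)) * np.array([5, 4, 3, 1, 1, 1, 1, 1, 1, 1])
X -= X.mean(axis=0)

lam, lr, steps = 0.1, 1e-3, 10_000
W1 = 0.01 * rng.normal(size=(k, d))   # encoder
W2 = 0.01 * rng.normal(size=(d, k))   # decoder

# Minimize (1/n)||X W1^T W2^T - X||_F^2 + lam(||W1||_F^2 + ||W2||_F^2).
for _ in range(steps):
    Z = X @ W1.T                      # latent codes, shape (n, k)
    R = Z @ W2.T - X                  # reconstruction residual
    gW2 = 2 * (R.T @ Z) / n + 2 * lam * W2
    gW1 = 2 * (R @ W2).T @ X / n + 2 * lam * W1
    W1 -= lr * gW1
    W2 -= lr * gW2

# Left singular vectors of the decoder vs. principal directions of X.
U_dec = np.linalg.svd(W2, full_matrices=False)[0]
U_pca = np.linalg.svd(X.T @ X / n)[0][:, :k]
alignment = np.abs(U_dec.T @ U_pca)   # near-identity if directions match
```

Per the talk's result, the converged factors should be symmetric (`W1` close to `W2.T`), and `alignment` should be close to the identity up to sign, whereas an unregularized LAE would only recover the top principal subspace, not the individual directions.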

ICML 2019.

This talk is part of the Statistics series.


© 2006-2019 Talks.cam, University of Cambridge.