
Progress Towards Understanding Generalization in Deep Learning



Passcode: 381314

There is, as yet, no satisfying theory explaining why common learning algorithms, such as those based on stochastic gradient descent, generalize in practice on overparameterized neural networks. I will survey the various approaches that have been taken to explain generalization in deep learning and identify some of the barriers these approaches face. I will then discuss my recent work on information-theoretic and PAC-Bayesian approaches to understanding generalization in noisy variants of SGD. In particular, I will highlight how conditioning can be exploited to obtain sharper data- and distribution-dependent generalization measures. I will also briefly touch on my work on properties of the optimization landscape and some of the challenges of incorporating these insights into a theory of generalization.
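To make the objects in the abstract concrete, here is a minimal NumPy sketch of a noisy variant of SGD (a Langevin-style update that adds Gaussian noise to each gradient step) together with the empirical generalization gap, i.e. test risk minus training risk, which the information-theoretic and PAC-Bayesian bounds discussed in the talk aim to control. All names, the linear-regression setup, and the hyperparameters are illustrative assumptions, not the speaker's actual experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: linear regression with y = x @ w_true + noise.
n_train, n_test, d = 100, 1000, 5
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + 0.1 * rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true + 0.1 * rng.normal(size=n_test)

def mse(w, X, y):
    """Mean squared error of weights w on data (X, y)."""
    return np.mean((X @ w - y) ** 2)

# Noisy SGD: minibatch gradient step plus isotropic Gaussian noise.
# The injected noise makes the distribution over the learned weights
# smooth, which is what information-theoretic / PAC-Bayesian analyses
# of generalization exploit.
w = np.zeros(d)
lr, noise_scale, batch = 0.01, 0.01, 10
for step in range(2000):
    idx = rng.choice(n_train, size=batch, replace=False)
    grad = 2 * X_train[idx].T @ (X_train[idx] @ w - y_train[idx]) / batch
    w -= lr * grad
    w += noise_scale * np.sqrt(2 * lr) * rng.normal(size=d)

# Empirical generalization gap: test risk minus training risk.
gap = mse(w, X_test, y_test) - mse(w, X_train, y_train)
print(f"train MSE: {mse(w, X_train, y_train):.4f}")
print(f"test  MSE: {mse(w, X_test, y_test):.4f}")
print(f"generalization gap: {gap:.4f}")
```

In this underparameterized toy, the gap is small almost trivially; the puzzle the talk addresses is why the gap also stays small for overparameterized networks, where classical capacity-based bounds are vacuous.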

This talk is part of the ML@CL Seminar Series.


© 2006-2021 Talks.cam, University of Cambridge.