
Representation, optimization and generalization properties of deep neural networks


If you have a question about this talk, please contact INI IT.

STSW04 - Future challenges in statistical scalability

Deep neural networks have improved the state-of-the-art performance for prediction problems across an impressive range of application areas. This talk describes some recent results in three directions. First, we investigate the impact of depth on the representational properties of deep residual networks, which compute near-identity maps at each layer: their representational power improves with depth, and the functional optimization landscape has the desirable property that stationary points are optimal. Second, we study the implications for optimization in deep linear networks, showing how the success of a family of gradient descent algorithms that regularize towards the identity function depends on a positivity condition on the regression function. Third, we consider how the performance of deep networks on training data compares to their predictive accuracy: we demonstrate deviation bounds that scale with a certain "spectral complexity," and we compare the behavior of these bounds with the observed performance of these networks on practical problems.
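
As a rough illustration of two ideas in the abstract, the NumPy sketch below builds a deep residual network whose layers are near-identity maps and computes a product-of-spectral-norms quantity in the spirit of the "spectral complexity" mentioned above. This is not the speaker's code: the layer form h + V tanh(U h), the weight scale 0.1, and the use of a plain product of spectral norms as the complexity proxy are assumptions made for this example.

    # Minimal sketch (illustrative assumptions only, not the talk's definitions).
    import numpy as np

    rng = np.random.default_rng(0)
    d, depth = 16, 10

    def residual_layer(h, U, V):
        """One residual block: the identity plus a small learned perturbation."""
        return h + V @ np.tanh(U @ h)

    # Small random weights, so each layer stays close to the identity map.
    layers = [(0.1 * rng.standard_normal((d, d)), 0.1 * rng.standard_normal((d, d)))
              for _ in range(depth)]

    x = rng.standard_normal(d)
    h = x
    for U, V in layers:
        h = residual_layer(h, U, V)

    # The composed map barely moves the input, even at depth 10.
    print("relative change after", depth, "layers:",
          np.linalg.norm(h - x) / np.linalg.norm(x))

    # A crude complexity proxy: the product of spectral norms of the
    # linearized per-layer maps I + V U (assumed form, for illustration).
    spectral_product = np.prod([np.linalg.norm(np.eye(d) + V @ U, 2)
                                for U, V in layers])
    print("product of layer spectral norms:", spectral_product)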

Joint work with Steve Evans, Dylan Foster, Dave Helmbold, Phil Long, and Matus Telgarsky.

This talk is part of the Isaac Newton Institute Seminar Series.
