
On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex



STSW01 - Theoretical and algorithmic underpinnings of Big Data

Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. I discuss several related, recent results in this area: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time, Lagrangian/Hamiltonian/symplectic perspective, (2) a discussion of how to escape saddle points efficiently in nonconvex optimization, and (3) the acceleration of Langevin diffusion.
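As background for the three items above (a sketch of standard formulations from the literature, not material from the talk itself; here \(f\) denotes the objective, \(\eta\) a step size, and \(B_t\) a Brownian motion): the continuous-time view of Nesterov acceleration in (1) studies limiting ordinary differential equations such as

\[
  \ddot{X}(t) + \tfrac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0,
\]

the saddle-point escaping schemes in (2) are perturbed gradient methods of the form

\[
  x_{k+1} = x_k - \eta\bigl(\nabla f(x_k) + \xi_k\bigr), \qquad \xi_k \ \text{isotropic noise injected when } \|\nabla f(x_k)\| \text{ is small},
\]

and the Langevin diffusion in (3) is the stochastic differential equation

\[
  dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t,
\]

whose stationary distribution is proportional to \(e^{-f}\).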

This talk is part of the Isaac Newton Institute Seminar Series.

