Stochastic variants of classical optimization methods, with complexity guarantees

  • Professor Coralia Cartis
  • Wednesday 01 May 2019, 14:00-15:00
  • CMS, MR14.

If you have a question about this talk, please contact J.W.Stevens.

Optimization is a key component of machine learning applications: it underpins the training of (neural-network, nonconvex) models and the tuning of their parameters. Classical optimization methods are challenged by the scale of machine learning applications, by the unavailability or cost of full derivatives, and by the stochastic nature of the problem; on the other hand, the simple approaches commonly used by the machine learning community leave room for improvement. Here we try to merge the two perspectives and adapt the strengths of classical optimization techniques to meet the challenges of data-science applications: from deterministic to stochastic problems, and from typical to large scale. We propose a general algorithmic framework and complexity analysis that allow the use of inexact, stochastic, and possibly even biased problem information in classical methods for nonconvex optimization. This work is joint with Katya Scheinberg (Cornell), Jose Blanchet (Columbia) and Matt Menickelly (Argonne).
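
To give a flavour of the general idea (this is an illustrative sketch, not the specific framework presented in the talk), the snippet below shows a classical adaptive-step iteration driven by mini-batch, hence inexact and stochastic, gradient and function estimates; the toy objective, batch size, and accept/reject rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nonconvex problem: fit y ~ tanh(X @ w) by least squares.
n, d = 1000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.tanh(X @ w_true) + 0.1 * rng.standard_normal(n)

def loss(w, idx):
    """Sampled (inexact) objective estimate on mini-batch `idx`."""
    r = np.tanh(X[idx] @ w) - y[idx]
    return 0.5 * np.mean(r ** 2)

def grad(w, idx):
    """Sampled (inexact, stochastic) gradient estimate on mini-batch `idx`."""
    z = X[idx] @ w
    r = np.tanh(z) - y[idx]
    return X[idx].T @ (r * (1.0 - np.tanh(z) ** 2)) / len(idx)

w = np.zeros(d)
alpha = 1.0    # adaptive step-size parameter (trust-region flavour)
batch = 64

for k in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    g = grad(w, idx)
    trial = w - alpha * g
    # Accept the step only if the *sampled* estimates show sufficient decrease,
    # mimicking the accept/reject and parameter update of classical methods.
    if loss(trial, idx) <= loss(w, idx) - 1e-4 * alpha * (g @ g):
        w, alpha = trial, min(2.0 * alpha, 10.0)   # success: expand the step
    else:
        alpha *= 0.5                               # failure: contract the step

print("final full-sample loss:", loss(w, np.arange(n)))
```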

This talk is part of the CCIMI Seminars series.
