
Gradient methods for huge-scale optimization problems


If you have a question about this talk, please contact Rachel Fogg.

We consider a new class of huge-scale problems: problems with sparse gradients. The most important functions of this type are piecewise linear. For optimization problems whose linear operators have uniform sparsity, we suggest a very efficient implementation of the iterations, whose total cost depends only logarithmically on the dimension. The technique is based on recursively updating the results of matrix/vector products and the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions.
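To make the updating idea concrete, here is a minimal sketch (not the implementation from the talk; the class and method names are illustrative assumptions). For f(x) = max_i (Ax)_i with sparse columns of A, changing a single coordinate x_j touches only the nonzeros of column j in y = Ax, and a segment tree over y restores the maximum in O(log m) per touched entry, so the per-iteration cost depends on the column sparsity rather than the full dimension:

```python
import math

class SparseMaxOracle:
    """Maintain y = A @ x and f(x) = max_i y_i under single-coordinate
    changes of x. Illustrative sketch only."""

    def __init__(self, cols, m):
        # cols[j] = list of (row index i, value A[i, j]); columns are sparse
        self.cols = cols
        self.m = m
        size = 1
        while size < m:
            size *= 2
        self.size = size
        # segment tree over y, storing partial maxima; root is tree[1]
        self.tree = [-math.inf] * (2 * size)
        self.y = [0.0] * m          # y = A @ x, starting from x = 0
        for i in range(m):
            self._set(i, 0.0)

    def _set(self, i, val):
        # point update of leaf i, then refresh the O(log m) ancestors
        k = self.size + i
        self.tree[k] = val
        k //= 2
        while k:
            self.tree[k] = max(self.tree[2 * k], self.tree[2 * k + 1])
            k //= 2

    def update_coordinate(self, j, delta):
        # x[j] += delta: only the nonzeros of column j change y,
        # each max-update costing O(log m)
        for i, a in self.cols[j]:
            self.y[i] += delta * a
            self._set(i, self.y[i])

    def max_value(self):
        # current f(x) = max_i (A x)_i
        return self.tree[1]
```

For a matrix with p nonzeros per column, one coordinate step costs O(p log m) instead of the O(mn) needed to recompute Ax and its maximum from scratch, which is the kind of saving the abstract refers to.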

We show that this updating technique can be efficiently coupled with the simplest gradient methods. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments and discuss extensions of this technique.
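For intuition about the kind of scheme being coupled with these updates, here is a toy, self-contained sketch of random coordinate subgradient steps on a piecewise-linear max-type function (all names are illustrative assumptions, and this plain version recomputes f from scratch rather than using the fast recursive updates described in the abstract):

```python
import random

def f(A, b, x):
    # piecewise-linear objective f(x) = max_i ( (A x)_i - b_i )
    return max(sum(a * xi for a, xi in zip(row, x)) - bi
               for row, bi in zip(A, b))

def random_coordinate_descent(A, b, x, step, iters, seed=0):
    """Random coordinate subgradient method (illustrative sketch)."""
    rng = random.Random(seed)
    n = len(x)
    best = f(A, b, x)
    for _ in range(iters):
        # a row attaining the max supplies a subgradient of f at x
        vals = [sum(a * xi for a, xi in zip(row, x)) - bi
                for row, bi in zip(A, b)]
        i_star = max(range(len(vals)), key=vals.__getitem__)
        j = rng.randrange(n)       # pick a random coordinate
        g_j = A[i_star][j]         # its component of the subgradient
        x[j] -= step * g_j         # coordinate subgradient step
        best = min(best, f(A, b, x))
    return x, best
```

For example, with A = [[1, 0], [0, 1], [-1, -1]] and b = 0, the objective is max(x_0, x_1, -x_0 - x_1), whose minimum value 0 is attained at the origin; starting from x = (1, 2), the best value found decreases steadily toward 0.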

Biography: Yurii Nesterov is a professor at the Catholic University of Louvain, Belgium, where he is a member of the Center for Operations Research and Econometrics (CORE). He is the author of 4 monographs and more than 80 refereed papers in leading optimization journals. He was awarded the Dantzig Prize 2000, given by SIAM and the Mathematical Programming Society for research having a major impact on the field of mathematical programming; the John von Neumann Theory Prize 2009, given by INFORMS; the Charles Broyden Prize 2010, for the best paper in the journal Optimization Methods and Software; and the Honorable Francqui Chair (University of Liège, 2011-2012). The main direction of his research is the development of efficient numerical methods for convex and nonconvex optimization problems, supported by a global complexity analysis. His most important results concern general interior-point methods (the theory of self-concordant functions), fast gradient methods (the smoothing technique), and the global complexity analysis of second-order schemes (cubic regularization of Newton's method).

This talk is part of the CUED Control Group Seminars series.

