
Highly-Smooth Zero-th Order Online Optimization


If you have a question about this talk, please contact Quentin Berthet.

The minimization of convex functions that are only available through partial and noisy information is a key methodological problem in machine learning. We consider online convex optimization with noisy zeroth-order information, that is, noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as online logistic regression. We show that, in contrast to gradient-based algorithms, high-order smoothness can be exploited to improve estimation rates, with a precise dependence of our upper bounds on the degree of smoothness. In particular, we show that for infinitely differentiable functions we recover essentially the same dependence on the sample size as gradient-based algorithms, up to an extra dimension-dependent factor. This is done for convex and strongly convex functions, with both finite-horizon and anytime algorithms.
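
As background for the zeroth-order setting described above, here is a minimal sketch of one standard approach: projected online gradient descent driven by a two-point spherical gradient estimator, which queries only noisy function values. This is not the speakers' algorithm; the talk's methods use more refined estimators to exploit higher-order smoothness. All names here (`zeroth_order_ogd`, `oracle`, the step-size and smoothing constants) are illustrative assumptions.

```python
import numpy as np

def sample_sphere(d, rng):
    """Uniform random direction on the unit sphere in R^d."""
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def zeroth_order_ogd(oracle, d, T, eta=0.5, delta=0.01, radius=1.0, seed=0):
    """Projected online gradient descent using only noisy function values.

    oracle(x) returns a noisy evaluation of the loss at the point x.
    A two-point spherical estimator approximates the gradient of a
    smoothed surrogate of the loss; iterates are projected back onto
    the Euclidean ball of the given radius.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    iterates = []
    for t in range(T):
        u = sample_sphere(d, rng)
        # Two-point estimate: (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
        g = (d / (2.0 * delta)) * (oracle(x + delta * u) - oracle(x - delta * u)) * u
        x = x - (eta / np.sqrt(t + 1)) * g   # decreasing step size
        norm = np.linalg.norm(x)
        if norm > radius:                    # project onto the feasible ball
            x = x * (radius / norm)
        iterates.append(x.copy())
    return np.mean(iterates, axis=0)         # averaged iterate (online-to-batch)
```

A quick usage example on a noisy quadratic, again purely illustrative:

```python
rng = np.random.default_rng(1)
x_star = np.array([0.5, -0.3])
oracle = lambda x: np.sum((x - x_star) ** 2) + 0.01 * rng.standard_normal()
x_hat = zeroth_order_ogd(oracle, d=2, T=5000)  # x_hat approaches x_star
```

The d/(2*delta) scaling makes the estimate an unbiased gradient of a smoothed version of the loss; its variance grows with the dimension, which is one way to see where a dimension-dependent factor in the rates can arise.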

This talk is part of the Statistics series.
