
Online Learning and Online Convex Optimisation


If you have a question about this talk, please contact Elre Oldewage.

In online learning, data arrives sequentially and model parameters are updated at each step, in contrast to batch training, where all data is available at once. Online learning has large-scale applications such as online web ranking and online advertisement placement, and is closely related to continual learning. The field itself is well established, with a substantial body of theory.
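The sequential protocol described above can be sketched as a simple loop: at each round the learner predicts on the incoming example, suffers a loss, and updates its parameters before the next example arrives. This is an illustrative sketch, not code from the talk or the survey; the function names and the running-mean example below are assumptions.

```python
# A minimal sketch of the generic online-learning loop (illustrative):
# predict on the current example, suffer the loss, then update immediately.

def online_learn(stream, init_params, predict, loss, update):
    """Run the generic online-learning loop over a data stream."""
    params = init_params
    total_loss = 0.0
    for x, y in stream:
        y_hat = predict(params, x)       # predict on the current example
        total_loss += loss(y_hat, y)     # suffer this round's loss
        params = update(params, x, y)    # update parameters right away
    return params, total_loss
```

For instance, online estimation of a mean fits this template: `predict` returns the running mean, `loss` is the squared error, and `update` folds the new observation into the mean.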

We will closely follow “Online Learning and Online Convex Optimization” by Shalev-Shwartz (2011), up to and including Section 2.5. We will see the central role convexity plays, and analyse the regret of well-known algorithms such as Follow-The-Leader, Follow-The-Regularised-Leader, and Online Gradient Descent.
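To make the regret notion concrete, here is a hedged one-dimensional sketch of Online Gradient Descent on squared losses f_t(w) = (w − z_t)², using the standard update w_{t+1} = w_t − η∇f_t(w_t). Regret is measured against the best fixed parameter in hindsight (for squared loss, the mean of the targets). The loss family, step size, and function name are illustrative assumptions, not taken from the survey.

```python
# Illustrative 1-D Online Gradient Descent on squared losses
# f_t(w) = (w - z_t)^2, with update w_{t+1} = w_t - eta * grad f_t(w_t).
# Loss family and step size are assumptions for this sketch.

def ogd(targets, eta=0.1, w0=0.0):
    """Run OGD over a target sequence; return the final iterate and the regret."""
    w = w0
    cumulative_loss = 0.0
    for z in targets:
        cumulative_loss += (w - z) ** 2   # loss suffered at round t
        grad = 2.0 * (w - z)              # gradient of f_t at w_t
        w = w - eta * grad                # OGD update
    # Regret: compare to the best fixed w in hindsight,
    # which for squared loss is the mean of the targets.
    best_w = sum(targets) / len(targets)
    best_loss = sum((best_w - z) ** 2 for z in targets)
    return w, cumulative_loss - best_loss
```

On a constant target sequence the comparator suffers zero loss while the iterates only approach it, so the regret here is strictly positive; the talk's analysis concerns how such regret grows with the number of rounds.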

This talk is part of the Machine Learning Reading Group @ CUED series.

