
Bandits with Switching Costs: T^{2/3} Regret


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.

Consider the adversarial two-armed bandit problem in a setting where the player incurs a unit cost each time he switches actions. We prove that the player's T-round regret in this setting (i.e., his excess loss compared to the better of the two actions) is T^{2/3} (up to a logarithmic factor). In the corresponding full-information problem, the minimax regret is known to grow at the slower rate of T^{1/2}. The gap between these two rates shows that learning with bandit feedback (i.e., observing only the loss of the player's chosen action, not that of the alternative) can be significantly harder than learning with full-information feedback. It also shows that, without switching costs, any regret-minimizing algorithm for the bandit problem must sometimes switch actions very frequently. The proof is based on an information-theoretic analysis of a loss process arising from a multi-scale random walk.
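The multi-scale random walk underlying the lower bound can be sketched in simulation. The following is a simplified illustration, not the construction from the talk: the parent function parent(t) = t - 2^{rho(t)}, where 2^{rho(t)} is the largest power of two dividing t, links each round to an earlier round at a coarser scale, and the loss gap between the two arms rides on the resulting walk. The parameter choices (sigma, eps, the 0.5 baseline, clipping to [0, 1]) are illustrative assumptions, not values taken from the paper.

```python
import random

def parent(t):
    # t & -t isolates the lowest set bit of t, i.e. 2^{rho(t)},
    # where rho(t) is the largest power of 2 dividing t.
    # parent(t) = t - 2^{rho(t)} links round t to a coarser scale.
    return t - (t & -t)

def multiscale_walk(T, sigma=0.1, seed=0):
    # W[t] = W[parent(t)] + Gaussian increment: a multi-scale random walk
    # in which nearby rounds share most of their randomness.
    rng = random.Random(seed)
    W = [0.0] * (T + 1)
    for t in range(1, T + 1):
        W[t] = W[parent(t)] + rng.gauss(0.0, sigma)
    return W

def loss_sequences(T, eps=0.01, sigma=0.1, seed=0):
    # Illustrative loss processes for the two arms: arm 0 follows the walk
    # around a 0.5 baseline; arm 1 adds a small gap eps. Both are clipped
    # to [0, 1] so they are valid losses. All constants are assumptions.
    W = multiscale_walk(T, sigma, seed)
    clip = lambda x: min(1.0, max(0.0, x))
    arm0 = [clip(0.5 + w) for w in W[1:]]
    arm1 = [clip(0.5 + w + eps) for w in W[1:]]
    return arm0, arm1
```

Because increments at coarse scales are shared by long stretches of rounds, a player who rarely switches cannot tell which arm is better, which is the intuition behind the T^{2/3} lower bound.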

(Joint work with Ofer Dekel, Jian Ding, and Tomer Koren; to appear in STOC 2014. Available at http://arxiv.org/abs/1310.2997.)

This talk is part of the Microsoft Research Cambridge, public talks series.



 
