Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design

If you have a question about this talk, please contact Peter Orbanz.

Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
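To make the algorithm in the abstract concrete, below is a minimal sketch of GP-UCB: at each round it fits a GP posterior to the observations so far and queries the point maximizing the upper confidence index mu(x) + sqrt(beta_t) * sigma(x). The squared-exponential kernel, the fixed finite candidate grid, the simplified beta_t schedule, and the toy payoff function are all illustrative assumptions, not the paper's exact setup.

    # A minimal GP-UCB sketch (assumptions: RBF kernel, known noise level,
    # finite candidate set, simplified confidence width beta_t).
    import numpy as np

    def rbf_kernel(A, B, lengthscale=0.2):
        """Squared-exponential covariance between the rows of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def gp_posterior(X, y, Xs, noise=1e-2, lengthscale=0.2):
        """GP posterior mean and variance at the candidate points Xs."""
        K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
        Ks = rbf_kernel(X, Xs, lengthscale)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = Ks.T @ alpha
        v = np.linalg.solve(L, Ks)
        var = 1.0 - (v**2).sum(0)  # RBF prior variance is 1 on the diagonal
        return mu, np.maximum(var, 0.0)

    def gp_ucb(f, candidates, T=30, noise=1e-2):
        """Sequentially pick x_t = argmax mu_{t-1}(x) + sqrt(beta_t) sigma_{t-1}(x)."""
        rng = np.random.default_rng(0)
        X, y = [], []
        x0 = candidates[rng.integers(len(candidates))]  # random initial query
        X.append(x0); y.append(f(x0) + np.sqrt(noise) * rng.standard_normal())
        for t in range(2, T + 1):
            # Simplified stand-in for the theoretical schedule for finite sets.
            beta_t = 2.0 * np.log(len(candidates) * t**2)
            mu, var = gp_posterior(np.array(X), np.array(y), candidates, noise)
            x = candidates[np.argmax(mu + np.sqrt(beta_t * var))]
            X.append(x); y.append(f(x) + np.sqrt(noise) * rng.standard_normal())
        return np.array(X), np.array(y)

    if __name__ == "__main__":
        grid = np.linspace(0.0, 1.0, 200)[:, None]
        f = lambda x: float(np.sin(6 * x[0]) * x[0])  # toy payoff function
        X, y = gp_ucb(f, grid)
        print("best observed payoff:", y.max())

The regret analysis in the paper bounds the cumulative regret of this rule in the form R_T = O*(sqrt(T * beta_T * gamma_T)), where gamma_T is the maximal information gain after T rounds; bounding gamma_T via the kernel's operator spectrum yields the explicit sublinear rates mentioned in the abstract.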

This talk is part of the Machine Learning @ CUED series.

