BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Statistics
SUMMARY:Data-driven calibration of linear estimators with minimal
  penalties\, with an application to multi-task regression - Sylvain
  Arlot\, École Normale Supérieure\, Paris
DTSTART;TZID=Europe/London:20111104T160000
DTEND;TZID=Europe/London:20111104T170000
UID:TALK32896AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/32896
DESCRIPTION:This talk tackles the problem of selecting among several
  linear estimators in non-parametric regression\; this includes
  model selection for linear regression\, the choice of a
  regularization parameter in kernel ridge regression or spline
  smoothing\, the choice of a kernel in multiple kernel learning\,
  the choice of a bandwidth for Nadaraya-Watson estimators\, and the
  choice of k for k-nearest neighbors regression.\n\nWe propose a new
  algorithm which first consistently estimates the variance of the
  noise\, based on the concept of minimal penalty\, which was
  previously introduced in the context of model selection. Plugging
  our variance estimate into Mallows’ C_L penalty then provably
  yields an algorithm satisfying an oracle inequality. Simulation
  experiments show that the proposed algorithm often significantly
  improves on existing calibration procedures such as 10-fold
  cross-validation or generalized cross-validation.\n\nWe then
  provide an application to the kernel multiple ridge regression
  framework\, which we refer to as multi-task regression. The
  theoretical analysis of this problem shows that the key quantity
  for an optimal calibration is the covariance matrix of the noise
  between the different tasks. We present a new algorithm for
  estimating this covariance matrix\, based on several single-task
  variance estimates. We show\, in a non-asymptotic setting and
  under mild assumptions on the target function\, that this
  estimator converges towards the covariance matrix. Plugging this
  estimator into the corresponding ideal penalty then leads to an
  oracle inequality. We illustrate the behaviour of our algorithm on
  synthetic examples.\n\n\nThis talk is based on two joint works
  with Francis Bach and Matthieu Solnon:\n\nS. Arlot\, F. Bach.
  Data-driven Calibration of Linear Estimators with Minimal
  Penalties. arXiv:0909.1884\n\nM. Solnon\, S. Arlot\, F. Bach.
  Multi-task Regression using Minimal Penalties. arXiv:1107.4512\n\n
LOCATION:MR12\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
CONTACT:Richard Samworth
END:VEVENT
END:VCALENDAR