BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Statistics
SUMMARY:Adaptive Piecewise Polynomial Estimation via Trend Filtering - Rya
 n J. Tibshirani\, Carnegie Mellon University
DTSTART;TZID=Europe/London:20140530T160000
DTEND;TZID=Europe/London:20140530T170000
UID:TALK52747AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/52747
DESCRIPTION:We discuss trend filtering\, a recently proposed tool of Kim e
 t al. (2009) for nonparametric regression. The trend filtering estimate i
 s defined as the minimizer of a penalized least squares criterion\, in wh
 ich the penalty term sums the absolute kth order discrete derivatives ove
 r the input points. Perhaps not surprisingly\, trend filtering estimates a
 ppear to have the structure of kth degree spline functions\, with adaptiv
 ely chosen knot points (we say "appear" here as trend filtering estimate
 s are not really functions over continuous domains\, and are only define
 d over the discrete set of inputs). This brings to mind comparisons t
 o other nonparametric regression tools that also produce adaptive spli
 nes\; in particular\, we compare trend filtering to smoothing splines
 \, which penalize the sum of squared derivatives across input point
 s\, and to locally adaptive regression splines (Mammen & van de Gee
 r 1997)\, which penalize the total variation of the kth derivative.
 \n\nEmpirically\, trend filtering estimates adapt to the local leve
 l of smoothness much better than smoothing splines\, and further\, t
 hey exhibit a remarkable similarity to locally adaptive regression s
 plines. Theoretically\, (suitably tuned) trend filtering estimates c
 onverge to the true underlying function at the minimax rate over th
 e class of functions whose kth derivative is of bounded variation. T
 he proof of this result follows from an asymptotic pairing of trend f
 iltering and locally adaptive regression splines\, which have alread
 y been shown to converge at the minimax rate (Mammen & van de Geer 1
 997). At the core of this argument is a new result tying together th
 e fitted values of two lasso problems that share the same outcome ve
 ctor\, but have different predictor matrices.
LOCATION:MR12\, Centre for Mathematical Sciences\, Wilberforce Road\, Camb
 ridge
CONTACT:
END:VEVENT
END:VCALENDAR