BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Cambridge Image Analysis Seminars
SUMMARY:The Structure-Adaptive Acceleration of Stochastic
Proximal Gradient Algorithms - Junqi Tang\, Univer
sity of Edinburgh
DTSTART;TZID=Europe/London:20190917T140000
DTEND;TZID=Europe/London:20190917T150000
UID:TALK129295AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/129295
DESCRIPTION:Stochastic gradient methods have become the de fac
 to techniques in data science\, signal processing and machine
  learning\, owing to their computational efficiency on large-
 scale optimization problems. Over the past few years\, accele
 rated stochastic gradient algorithms have been extensively st
 udied and developed. These methods are not only excellent num
 erically but also worst-case optimal theoretically for convex
  and smooth objective functions. In many real-world applicati
 ons\, we often consider composite optimization tasks in which
  non-smooth regularizers are used for better estimation or ge
 neralization. Such regularizers usually enforce solutions wit
 h low-dimensional structure\, such as sparsity\, group-sparsi
 ty\, low-rankness and piecewise smoothness. In this talk\, we
  present structure-adaptive variants of randomized optimizati
 on algorithms\, including accelerated variance-reduced SGD an
 d accelerated proximal coordinate descent\, for more efficien
 tly solving large-scale composite optimization problems. Thes
 e algorithms are tailored to exploit the low-dimensional stru
 cture of the solution via judiciously designed restart scheme
 s based on the restricted strong-convexity property that the 
 non-smooth regularization induces in the objective function. 
 Our convergence analysis shows that this approach leads to pr
 ovably improved iteration complexity\, and we validate the ef
 ficiency of the algorithms numerically on large-scale sparse 
 regression tasks.
LOCATION:MR 14\, Centre for Mathematical Sciences
CONTACT:Jingwei Liang
END:VEVENT
END:VCALENDAR