BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Approximate Bayesian Inference for Large Scale Inv
erse Problems: A Computational Viewpoint - Prof. M
atthias Seeger (EPFL)
DTSTART;TZID=Europe/London:20110728T113000
DTEND;TZID=Europe/London:20110728T123000
UID:TALK32198AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/32198
DESCRIPTION:Tomographic sparse linear inverse problems are at
the core of medical imaging\n(MRI\, CT)\, astronom
y\, analysis of large scale networks\, and many ot
her\napplications. Viewed as a probabilistic graph
ical model\, they are characterized\nby a densely\
, non-locally coupled likelihood and a non-Gaussia
n sparsity\nprior. Even MAP estimation is challeng
ing for these models\, yet intense\nrecent researc
h has produced a range of competitive MAP algorith
ms. However\,\nfor these underdetermined problems\
, there are compelling reasons to move\nbeyond MAP
towards Bayesian inference and decision making\,
such as increased\nrobustness and interpretability
\, built-in mechanisms to fit linear or nonlinear\
nhyperparameters\, and adaptation of the measureme
nt operator by experimental\ndesign (active learni
ng). Unfortunately\, current approximate inference
\nalgorithms are many orders of magnitude too slow
to meet this challenge.\n\nA key strategy to na
rrow the gap to MAP is to find ways to reduce appr
oximate\ninference to subproblems of penalized lik
elihood structure. Using tools from\nconvex dualit
y\, I show how to achieve such iterative decouplin
g for a range\nof commonly used variational infere
nce relaxations. Resulting double loop\nalgorithms
are orders of magnitude faster than previous coor
dinate descent\n(or "message-passing") algorithms.
Not surprisingly\, approximate inference\nremains
harder than MAP\, but the added difficulties are
transparent and\namenable to fast techniques from
signal processing and numerical mathematics.\n\nTi
me permitting\, I will comment on work in progress
on integrating factorization\nassumptions and on
approximate Bayesian blind deconvolution.\n
LOCATION:Engineering Department\, CBL Room 438
CONTACT:Zoubin Ghahramani
END:VEVENT
END:VCALENDAR