BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning Reading Group @ CUED
SUMMARY:Hessian-based Markov-Chain Monte Carlo Algorithms
  - Tom Minka (Microsoft Research Ltd)
DTSTART;TZID=Europe/London:20090702T140000
DTEND;TZID=Europe/London:20090702T153000
UID:TALK18078AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/18078
DESCRIPTION:Hessian-based Markov-Chain Monte Carlo Algorithms\nTom Minka
  Microsoft Research Cambridge\n----\nI will talk about how to make
  Markov-chain Monte Carlo run more efficiently in high-dimensional\,
  continuous spaces. The idea is to shape the Markov transition
  density according to the local Hessian of the probability density
  function. This leads to a Hessian-based Metropolis-Hastings
  algorithm that we call HMH. A naive implementation of this idea
  would be quite expensive however\, requiring the Hessian to be
  recomputed at each sample. Instead I will describe how to
  incrementally update the Hessian\, and how to get many samples from
  the same Hessian (using the multiple-try Metropolis algorithm). The
  upshot is that\, given any function where you can do efficient
  Hessian-based optimization\, you can also do efficient
  sampling.\n\nJoint work with Yuan (Alan) Qi.\n\nLink to paper:\nhttp
 ://www.cs.purdue.edu/homes/alanqi/papers.html\n
LOCATION:Engineering Department\, CBL Room 438
CONTACT:Shakir Mohamed
END:VEVENT
END:VCALENDAR