BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Austerity in MCMC Land: Cutting the Computational Budget - Max We
 lling\, University of Amsterdam
DTSTART:20130327T140000Z
DTEND:20130327T150000Z
UID:TALK43686@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:Will MCMC survive the “Big Data revolution”? Current MCMC me
 thods for posterior inference compute the likelihood of a model twice f
 or every data-case in order to make a single binary decision: to accep
 t or reject a proposed parameter value. Compare this with stochastic g
 radient descent\, which uses O(1) computations per iteration. In this t
 alk I will discuss two MCMC algorithms that cut the computational budge
 t of an MCMC update. The first algorithm\, “stochastic gradient Langevi
 n dynamics” (and its successor “stochastic gradient Fisher scoring”)\, p
 erforms updates based on stochastic gradients and ignores the Metropol
 is-Hastings step altogether. The second algorithm uses an approximate M
 etropolis-Hastings rule where accept/reject decisions are made with hig
 h (but not perfect) confidence based on sequential hypothesis tests. W
 e argue that for any finite sampling window\, we can choose hyper-param
 eters (stepsize\, confidence level) such that the extra bias introduce
 d by these algorithms is more than compensated by the reduction in var
 iance due to the fact that we can draw more samples. We anticipate a n
 ew framework where bias and variance contributions to the sampling err
 or are optimally traded off.
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
