
Austerity in MCMC Land: Cutting the Computational Budget


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image/voice recorded, please consider this before attending.

Will MCMC survive the “Big Data revolution”? Current MCMC methods for posterior inference compute the likelihood of a model twice for every data case in order to make a single binary decision: to accept or reject a proposed parameter value. Compare this with stochastic gradient descent, which uses O(1) computations per iteration. In this talk I will discuss two MCMC algorithms that cut the computational budget of an MCMC update. The first algorithm, “stochastic gradient Langevin dynamics” (and its successor “stochastic gradient Fisher scoring”), performs updates based on stochastic gradients and ignores the Metropolis-Hastings step altogether. The second algorithm uses an approximate Metropolis-Hastings rule where accept/reject decisions are made with high (but not perfect) confidence based on sequential hypothesis tests. We argue that for any finite sampling window, we can choose hyper-parameters (step size, confidence level) such that the extra bias introduced by these algorithms is more than compensated by the reduction in variance due to the fact that we can draw more samples. We anticipate a new framework where bias and variance contributions to the sampling error are optimally traded off.
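As a rough illustration of the first approach, the sketch below shows a single stochastic gradient Langevin dynamics update; the function and argument names (sgld_step, grad_log_prior, grad_log_lik, step_size) are illustrative, not the speaker's notation. Each update follows a minibatch estimate of the log-posterior gradient and adds Gaussian noise whose variance matches the step size, skipping the Metropolis-Hastings accept/reject test entirely.

```python
import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik, N, step_size, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update (sketch).

    theta          -- current parameter value (numpy array)
    minibatch      -- a small random subset of the N data cases
    grad_log_prior -- callable: gradient of log p(theta)
    grad_log_lik   -- callable: gradient of log p(x | theta) for one data case
    N              -- total number of data cases in the full dataset
    """
    n = len(minibatch)
    # Unbiased minibatch estimate of the gradient of the log-posterior.
    grad_est = grad_log_prior(theta) + (N / n) * sum(
        grad_log_lik(theta, x) for x in minibatch
    )
    # Inject Langevin noise with variance equal to the step size; no
    # Metropolis-Hastings correction is applied to the proposed move.
    noise = rng.normal(0.0, np.sqrt(step_size), size=np.shape(theta))
    return theta + 0.5 * step_size * grad_est + noise
```

A caller would typically pass rng = np.random.default_rng() and anneal step_size toward zero over iterations, which is where the bias/variance trade-off discussed above comes in.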

This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.
