Bayesian inference for Markov processes with application to biochemical network dynamics

If you have a question about this talk, please contact Richard Samworth.

A number of interesting statistical applications require the estimation of parameters underlying a nonlinear multivariate continuous-time Markov process model, using partial and noisy discrete-time observations of the system state. Bayesian inference for this problem is difficult because the discrete-time transition density of the Markov process is typically intractable and computationally intensive to approximate. It turns out to be possible to develop MCMC algorithms that are exact, provided that one can simulate exact realisations of the process forwards in time. Such algorithms, often termed “likelihood-free” or “plug-and-play”, are very attractive, as they allow the problem of model development and simulation implementation to be separated from the development of inferential algorithms. Such techniques break down in the case of perfect observation or high-dimensional data, but more efficient algorithms can be developed if one is prepared to deviate from the likelihood-free paradigm, at least in the case of diffusion processes. The methods will be illustrated using examples from population dynamics and stochastic biochemical network dynamics.
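As a concrete illustration of the kind of exact, likelihood-free MCMC described in the abstract, the sketch below implements a particle marginal Metropolis-Hastings (PMMH) sampler: a bootstrap particle filter provides an unbiased likelihood estimate using nothing but exact forward simulation of the process, and that estimate drives a standard Metropolis-Hastings chain. PMMH is one well-known scheme of this type, not necessarily the specific algorithm presented in the talk; the immigration-death model, parameter values, observation model, and all function names are illustrative assumptions, not taken from the abstract.

import numpy as np

rng = np.random.default_rng(1)


def simulate_immigration_death(x0, theta, t_span):
    # Gillespie simulation of an immigration-death process:
    # immigration at rate theta[0], death at rate theta[1] * x.
    # Returns the exact state after t_span time units, starting from x0.
    lam, mu = theta
    x, t = x0, 0.0
    while True:
        total_rate = lam + mu * x
        t += rng.exponential(1.0 / total_rate)
        if t > t_span:
            return x
        if rng.random() < lam / total_rate:
            x += 1  # immigration event
        else:
            x -= 1  # death event


def particle_log_likelihood(y, theta, sigma, n_particles, dt, x0):
    # Bootstrap particle filter: an unbiased (on the natural scale) estimate
    # of the likelihood of observations y, made every dt time units with
    # Gaussian error of standard deviation sigma. Only forward simulation
    # of the process is required, so the scheme is "likelihood free".
    particles = np.full(n_particles, x0)
    log_like = 0.0
    for obs in y:
        # propagate every particle with the exact forward simulator
        particles = np.array([simulate_immigration_death(x, theta, dt)
                              for x in particles])
        # weight by the Gaussian observation density (log-sum-exp for stability)
        log_w = (-0.5 * ((obs - particles) / sigma) ** 2
                 - 0.5 * np.log(2.0 * np.pi * sigma ** 2))
        m = log_w.max()
        w = np.exp(log_w - m)
        log_like += m + np.log(w.mean())
        # multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
    return log_like


def pmmh(y, n_iters=500, n_particles=50, dt=1.0, sigma=2.0, x0=20):
    # Particle marginal Metropolis-Hastings: random walk on log(theta),
    # accepted using the particle-filter likelihood estimate. With a flat
    # prior on log(theta) and a symmetric proposal in log space, the
    # acceptance ratio reduces to the (estimated) likelihood ratio.
    theta = np.array([5.0, 0.2])  # initial (immigration, death) rates
    ll = particle_log_likelihood(y, theta, sigma, n_particles, dt, x0)
    samples = []
    for _ in range(n_iters):
        prop = theta * np.exp(0.1 * rng.standard_normal(2))
        ll_prop = particle_log_likelihood(y, prop, sigma, n_particles, dt, x0)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples)


if __name__ == "__main__":
    # synthetic data: simulate the process with known rates and add noise
    true_theta, dt, sigma, x0 = np.array([5.0, 0.2]), 1.0, 2.0, 20
    x, y = x0, []
    for _ in range(30):
        x = simulate_immigration_death(x, true_theta, dt)
        y.append(x + sigma * rng.standard_normal())
    post = pmmh(np.array(y))
    print("posterior means (immigration, death):", post[len(post) // 2:].mean(axis=0))

Because the acceptance step uses an unbiased likelihood estimate rather than the intractable exact likelihood, the chain still targets the exact posterior; the model code (the Gillespie simulator) can be swapped out without touching the inference code, which is the separation of concerns the abstract highlights.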

This talk is part of the Statistics series.
