
Approximating Labelled Markov Processes by Averaging



Markov processes with continuous state spaces, continuous time evolution, or both arise naturally in many areas of computer science: robotics, performance evaluation, and modelling and simulation, for example. For discrete systems there was a pioneering treatment of probabilistic bisimulation and its logical characterization due to Larsen and Skou [LS91]. The continuous case, however, was neglected for a time. For a little over a decade there has been significant activity among computer scientists, as it came to be realized that ideas from process algebra, like bisimulation and the existence of a modal characterization, would be useful for the study of such systems.

In [BDEP97] continuous-state Markov processes with labels to capture interactions were christened labelled Markov processes (LMPs). There is a vast literature on timed systems, hybrid systems, robotics and control theory that also refers to such systems. In [DGJP00, DGJP03] a theory of approximation for LMPs was initiated; it was refined and extended in [DD03, DDP03]. Finding finite approximations is vital to give a computational handle on such systems. These techniques were adapted to Markov decision processes (MDPs) and applied to find good estimates of value functions [FPP05]. The previous work was characterized by rather intricate proofs that did not seem to follow from basic ideas in any straightforward way. For example, the logical characterization of (probabilistic) bisimulation first proved in [DEP98] requires subtle properties of analytic spaces and rather painful constructions [Eda99]. Proofs of basic results in approximation theory also seemed to be more difficult than they should be.

In recent work we take an entirely new approach, in some ways "dual" to the normal view of probabilistic transition systems. We think of a Markov process as a transformer of functions defined on it, rather than as a transformer of the state. Thus, instead of working directly with a Markov kernel \tau(s, A) that takes a state s to a probability distribution over the state space, we think of a Markov process as transforming a function f into a new function \int f(s')\tau(s, ds') over the state space. This is the probabilistic analogue of working with predicate transformers, a point of view advocated by Kozen [Koz85] in a path-breaking early paper on probabilistic systems and logic.
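On a finite state space this dual view is easy to make concrete: the kernel \tau becomes a row-stochastic matrix, and transforming a function f is just a matrix-vector product. The following sketch (the kernel values are illustrative, not from the talk) shows the finite analogue of \int f(s')\tau(s, ds'):

```python
import numpy as np

# Illustrative 3-state kernel: tau[s, t] = probability of moving s -> t.
tau = np.array([
    [0.5, 0.5, 0.0],
    [0.1, 0.6, 0.3],
    [0.0, 0.2, 0.8],
])

def transform(f):
    """Apply the kernel as a function transformer:
    (T f)(s) = sum_t f(t) * tau(s, t),
    the finite analogue of \int f(s') tau(s, ds')."""
    return tau @ f

f = np.array([1.0, 0.0, 2.0])  # a real-valued function on the state space
g = transform(f)               # g[s] = expected value of f one step from s
```

Note that a Markov kernel, viewed this way, fixes the constant functions: transforming the all-ones function returns the all-ones function, because each row of the kernel sums to one.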

This new way of looking at things leads to three new results:

  1. It is possible to define bisimulation on general spaces – not just on analytic spaces – and show that it is an equivalence relation with easy categorical constructions. The logical characterization of bisimulation can also be done generally, and with no complicated measure theoretic arguments.
  2. A new and flexible approach to approximation based on averaging can be given. This vastly generalizes and streamlines the idea of using conditional expectations to compute approximation [DDP03].
  3. It is possible to show that there is a minimal bisimulation equivalent to a process obtained as the limit of finite approximants.
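The averaging idea in point 2 can be illustrated on a finite chain: fix a partition of the state space and replace the kernel by its conditional expectation with respect to that partition. The sketch below (kernel and partition are illustrative assumptions, and the uniform averaging over each block is one simple choice of conditioning measure) computes the block-to-block approximant:

```python
import numpy as np

# Illustrative 4-state kernel to be approximated.
tau = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.1, 0.1, 0.4, 0.4],
    [0.0, 0.2, 0.3, 0.5],
])

# An illustrative partition of the state space into two blocks.
blocks = [[0, 1], [2, 3]]

def average_kernel(tau, blocks):
    """Approximate tau by averaging over the partition: the block-to-block
    probability is the mean, over states in the source block, of the total
    mass that tau sends into the target block (a finite instance of
    conditioning on the partition sigma-algebra with uniform weights)."""
    k = len(blocks)
    approx = np.zeros((k, k))
    for i, B in enumerate(blocks):
        for j, C in enumerate(blocks):
            approx[i, j] = np.mean([tau[s, C].sum() for s in B])
    return approx

P = average_kernel(tau, blocks)  # a 2x2 stochastic kernel on the quotient
```

Averaging preserves stochasticity (each row of P still sums to one), and refining the partition yields finer approximants; the talk's point 3 concerns the limit of such finite approximants.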

There are a number of interesting categorical aspects to this work which I will emphasize in the talk. This is joint work with Philippe Chaput, and with Vincent Danos and Gordon Plotkin.


  • [BDEP97] R. Blute, J. Desharnais, A. Edalat, and P. Panangaden. Bisimulation for labelled Markov processes. In Proceedings of the Twelfth IEEE Symposium On Logic In Computer Science, Warsaw, Poland., 1997.
  • [DD03] Vincent Danos and Josée Desharnais. Labelled Markov processes: Stronger and faster approximations. In Proceedings of the 18th Symposium on Logic in Computer Science, Ottawa, 2003. IEEE.
  • [DDP03] Vincent Danos, Josée Desharnais, and Prakash Panangaden. Conditional expectation and the approximation of labelled Markov processes. In Roberto Amadio and Denis Lugiez, editors, CONCUR 2003 – Concurrency Theory, volume 2761 of Lecture Notes in Computer Science, pages 477–491. Springer-Verlag, 2003.
  • [DEP98] J. Desharnais, A. Edalat, and P. Panangaden. A logical characterization of bisimulation for labelled Markov processes. In Proceedings of the 13th IEEE Symposium On Logic In Computer Science, Indianapolis, pages 478–489. IEEE Press, June 1998.
  • [DGJP00] J. Desharnais, V. Gupta, R. Jagadeesan, and P. Panangaden. Approximation of labeled Markov processes. In Proceedings of the Fifteenth Annual IEEE Symposium On Logic In Computer Science, pages 95–106. IEEE Computer Society Press, June 2000.
  • [DGJP03] J. Desharnais, V. Gupta, R. Jagadeesan, and P. Panangaden. Approximating labeled Markov processes. Information and Computation, 184(1):160–200, July 2003.
  • [Eda99] Abbas Edalat. Semi-pullbacks and bisimulation in categories of Markov processes. Mathematical Structures in Computer Science, 9(5):523–543, 1999.
  • [FPP05] Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for Markov decision processes with infinite state spaces. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, pages 201–208, July 2005.
  • [Koz85] D. Kozen. A probabilistic PDL. Journal of Computer and System Sciences, 30(2):162–178, 1985.
  • [LS91] K. G. Larsen and A. Skou. Bisimulation through probabilistic testing. Information and Computation, 94:1–28, 1991.

This talk is part of the Logic and Semantics Seminar (Computer Laboratory) series.


