
Probabilistic languages for inference


If you have a question about this talk, please contact Sam Staton.

The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs.

As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions, observing variables, and sampling marginal distributions. Perhaps surprisingly, the probability monad is insufficient as a semantics for these programs; instead, we propose measure-theoretic distribution transformers as a semantics. We define a new set of combinators for distribution transformers, based on theorems in measure theory, and use these to obtain a rigorous semantics for our core calculus.

Factor graphs are an important but low-level data structure in machine learning; they enable many efficient inference algorithms. We compile our core language to a small imperative language that, in addition to the distribution transformer semantics, has a straightforward semantics as factor graphs, which we evaluate using an existing inference engine.
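To make the style of programming concrete, here is a minimal sketch in Haskell, not the talk's actual calculus: a finite weighted-enumeration monad with hypothetical sample and observe primitives. It shows the shape of such programs (draw from a prior, condition on an observation, read off the normalized posterior). The talk's semantics instead uses measure-theoretic distribution transformers, which also handle continuous distributions that this toy discrete model cannot.

import qualified Data.Map.Strict as Map

-- Hypothetical sketch, not the talk's calculus: a program denotes
-- a finite list of weighted outcomes.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [(f x, w) | (x, w) <- xs]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [(f x, u * v) | (f, u) <- fs, (x, v) <- xs]

instance Monad Dist where
  Dist xs >>= k = Dist [(y, u * v) | (x, u) <- xs, (y, v) <- runDist (k x)]

-- Sample from a prior given as an explicit finite distribution.
sample :: [(a, Double)] -> Dist a
sample = Dist

-- Observe that a condition holds: branches violating it get weight 0.
observe :: Bool -> Dist ()
observe b = Dist [((), if b then 1 else 0)]

-- Renormalize the surviving weights into a posterior distribution.
posterior :: Ord a => Dist a -> [(a, Double)]
posterior (Dist xs) =
  let grouped = Map.toList (Map.fromListWith (+) xs)
      total   = sum (map snd grouped)
  in  [(x, w / total) | (x, w) <- grouped]

-- Example model: a coin of unknown bias; we observe one head
-- and infer the posterior distribution over the bias.
model :: Dist Double
model = do
  bias <- sample [(0.25, 0.5), (0.75, 0.5)]      -- prior over the bias
  coin <- sample [(True, bias), (False, 1 - bias)]
  observe coin                                   -- the coin came up heads
  return bias

main :: IO ()
main = print (posterior model)
-- prints [(0.25,0.25),(0.75,0.75)]

Running main prints the posterior over the coin's bias after observing one head: the observe primitive assigns weight zero to the inconsistent branches, and the posterior function renormalizes what remains. That conditioning-then-renormalizing step is exactly what, per the abstract, goes beyond the plain probability monad.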

This talk is part of the Semantics Lunch (Computer Laboratory) series.


