Bayesian Inference with Kernels

If you have a question about this talk, please contact Zoubin Ghahramani.

An embedding of probability distributions into a reproducing kernel Hilbert space (RKHS) has been introduced: like the characteristic function, this provides a unique representation of a probability distribution in a high-dimensional feature space (uniqueness holds when the kernel is characteristic, e.g. a Gaussian kernel). This representation forms the basis of an inference procedure on graphical models, where the likelihoods are represented as RKHS functions. The resulting algorithm is completely nonparametric: all aspects of the model are represented implicitly and learned from a training sample. Both exact inference on trees and loopy belief propagation (BP) on pairwise Markov random fields are demonstrated.
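
As a concrete illustration (not from the talk itself), the sketch below computes the empirical mean embedding of a sample under a Gaussian RBF kernel, together with the squared maximum mean discrepancy (MMD), the RKHS distance between two embeddings. All function names, the bandwidth sigma, and the toy data are illustrative choices.

    # Sketch: empirical kernel mean embedding mu_P = (1/n) sum_i k(x_i, .)
    # under a Gaussian RBF kernel. For a characteristic kernel, mu_P = mu_Q
    # iff P = Q, and ||mu_P - mu_Q||^2 is the squared MMD estimated below.
    import numpy as np

    def rbf_gram(X, Y, sigma=1.0):
        # Gram matrix K[i, j] = exp(-||X[i] - Y[j]||^2 / (2 sigma^2)).
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-sq / (2 * sigma**2))

    def mean_embedding_at(X, queries, sigma=1.0):
        # Evaluate mu_P(q) = (1/n) sum_i k(x_i, q) at each query point q.
        return rbf_gram(queries, X, sigma).mean(axis=1)

    def mmd2(X, Y, sigma=1.0):
        # Biased estimate of ||mu_P - mu_Q||^2 in the RKHS.
        return (rbf_gram(X, X, sigma).mean() + rbf_gram(Y, Y, sigma).mean()
                - 2.0 * rbf_gram(X, Y, sigma).mean())

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(500, 1))  # sample from P
    Y = rng.normal(0.5, 1.0, size=(500, 1))  # sample from Q, shifted mean
    print(mean_embedding_at(X, np.zeros((1, 1))))  # embedding evaluated at 0
    print(mmd2(X, Y))  # strictly positive in expectation since P != Q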

Kernel message passing can be applied to general domains where kernels are defined, handling challenging cases such as discrete variables with huge domains, or very complex, non-Gaussian continuous distributions. In experiments, the approach outperforms state-of-the-art techniques in a cross-lingual document retrieval task and a camera rotation estimation problem. Finally, time permitting, a more general kernelized Bayes’ law will be described, in which a prior distribution embedding is updated to provide a posterior distribution embedding. This last approach makes weaker assumptions on the underlying distributions, but is somewhat more complex to implement.
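
As a rough sketch of the basic update underlying such embedding-based inference (again illustrative, not the speakers' implementation), the code below computes an empirical conditional mean embedding from joint samples; the full kernel Bayes' rule replaces these sample weights with prior-dependent ones and requires a second regularized inversion. The regularizer lam and bandwidth sigma are assumed hyperparameters.

    # Sketch: empirical conditional mean embedding, the core operation in
    # kernel message passing. Given joint samples (x_i, y_i), the embedding
    # of P(Y | X = x*) is approximated by sum_i w_i k(y_i, .), where
    #   w = (K_X + n * lam * I)^{-1} k_X(x*).
    # The full kernel Bayes' rule additionally reweights by a prior
    # embedding; lam and sigma below are illustrative hyperparameters.
    import numpy as np

    def rbf_gram(X, Y, sigma=1.0):
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-sq / (2 * sigma**2))

    def conditional_weights(X, x_star, lam=1e-3, sigma=1.0):
        # Solve the regularized linear system for the embedding weights.
        n = X.shape[0]
        K = rbf_gram(X, X, sigma)
        k = rbf_gram(X, x_star, sigma)  # k_X(x*), an (n, 1) column
        return np.linalg.solve(K + n * lam * np.eye(n), k)

    # Toy joint sample with Y = X + noise; condition on x* = 1.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 1))
    Y = X + 0.1 * rng.normal(size=(300, 1))
    w = conditional_weights(X, np.array([[1.0]]))
    # Point evaluation mu_{Y|x*}(y) = sum_i w_i k(y_i, y), here at y = 1,
    # where the conditional distribution concentrates.
    print((rbf_gram(np.array([[1.0]]), Y) @ w).item())

Note that conditioning in this way never forms an explicit density: everything is expressed through Gram matrices over the training sample, which is what makes the approach nonparametric.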

Joint work with Danny Bickson, Kenji Fukumizu, Carlos Guestrin, Yucheng Low, and Le Song.

This talk is part of the Machine Learning @ CUED series.
