
Generalization in Learning


If you have a question about this talk, please contact Zoubin Ghahramani.

It is nowadays common to evaluate supervised learning algorithms by their generalization ability, i.e., their out-of-sample performance. In this context, generalization bounds provide solid theoretical ground for the analysis of existing algorithms and the development of new ones. In contrast to supervised learning, the situation in unsupervised learning is much more obscure: given two reasonable solutions (e.g., two possible segmentations of an image), we cannot give a well-founded answer as to which one is better.

In my talk I will show that it is possible to define and analyze the generalization abilities of unsupervised learning approaches in much the same way as is done in supervised learning. To support this approach with formal analysis, I will derive PAC-Bayesian generalization bounds for density estimation. To demonstrate the approach in practice, I will derive and apply PAC-Bayesian generalization bounds in the context of co-clustering.
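As illustrative background only (the talk derives its own bounds for density estimation and co-clustering, which may take a different form), McAllester-style PAC-Bayesian bounds follow a standard template:

```latex
% Classical McAllester-style PAC-Bayesian bound (background sketch,
% not the specific bound derived in this talk).
% P: prior over hypotheses, Q: posterior, n: sample size, \delta: confidence.
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all posteriors Q:
\[
L(Q) \;\le\; \hat{L}(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]
% where L(Q) is the expected (out-of-sample) loss of the Gibbs predictor
% drawn from Q, and \hat{L}(Q) its empirical counterpart on the sample.
```

The KL(Q‖P) term penalizes posteriors that stray far from the prior, which is how bounds of this kind quantify the trade-off between fitting the sample and generalizing beyond it.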

This talk is part of the Machine Learning @ CUED series.
