Not so naive Bayesian classification

If you have a question about this talk, please contact Zoubin Ghahramani.

Machine learning is classically conceived as a search through a hypothesis space for the hypothesis that best fits the training data. In contrast, naive Bayes performs no search: it extrapolates an estimate of a high-order conditional probability by composition from lower-order conditional probabilities (a minimal sketch follows the list below). In this talk I show how this searchless approach can be generalised, creating a family of learners that provides a principled method for controlling the bias/variance trade-off. At one extreme, very low variance can be achieved, as is appropriate for small data. Bias can be decreased with larger data in a manner that ensures Bayes-optimal asymptotic error. These algorithms have the desirable properties of
  • training time that is linear with respect to training set size,
  • learning from a single pass through the data,
  • allowing incremental learning,
  • supporting parallel and anytime classification,
  • providing direct prediction of class probabilities,
  • supporting direct handling of missing values, and
  • providing robust handling of noise.
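
To make "composition from lower-order conditional probabilities" concrete, here is a minimal Python sketch of the naive Bayes base case, written for this page rather than taken from the talk: the classifier is nothing but smoothed counts (the class prior P(y) and the per-attribute conditionals P(x_i|y)), so training is a single linear pass, updates are incremental, and class probabilities are predicted directly. The class name NaiveBayesCounter and the toy data are illustrative assumptions.

from collections import defaultdict
import math

class NaiveBayesCounter:
    """Categorical naive Bayes trained purely by counting: no search over a
    hypothesis space, a single linear pass over the data, and trivially
    incremental updates."""

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing             # Laplace smoothing constant
        self.class_counts = defaultdict(int)   # n(y)
        self.joint_counts = defaultdict(int)   # n(x_i = v, y)
        self.value_sets = defaultdict(set)     # values seen for each attribute i
        self.n = 0                             # total examples seen

    def update(self, x, y):
        # Absorbing one example is the whole of training; calling this again
        # on later data is incremental learning.
        self.n += 1
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            self.joint_counts[(i, v, y)] += 1
            self.value_sets[i].add(v)

    def predict_proba(self, x):
        # Compose the high-order estimate of P(y | x) from the stored
        # low-order conditionals: P(y | x) ∝ P(y) * prod_i P(x_i | y).
        log_scores = {}
        for y, ny in self.class_counts.items():
            score = math.log((ny + self.smoothing) /
                             (self.n + self.smoothing * len(self.class_counts)))
            for i, v in enumerate(x):
                k = len(self.value_sets[i])    # attribute cardinality for smoothing
                score += math.log((self.joint_counts[(i, v, y)] + self.smoothing) /
                                  (ny + self.smoothing * k))
            log_scores[y] = score
        # Normalise in log space to return class probabilities directly.
        m = max(log_scores.values())
        z = sum(math.exp(s - m) for s in log_scores.values())
        return {y: math.exp(s - m) / z for y, s in log_scores.items()}

# Toy usage: one pass over four examples, then a direct probability estimate.
nb = NaiveBayesCounter()
for x, y in [(("sunny", "hot"), "no"), (("rainy", "mild"), "yes"),
             (("sunny", "mild"), "yes"), (("rainy", "hot"), "no")]:
    nb.update(x, y)
print(nb.predict_proba(("sunny", "mild")))

Read this way, the bias/variance control the abstract describes would correspond to varying the order of the conditionals being composed, with the counting-based structure (and hence the listed properties) preserved; that reading is a gloss on the abstract, not a detail taken from the talk.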

Despite being generative, these learners deliver classification accuracy competitive with state-of-the-art discriminative techniques.

This talk is part of the Machine Learning @ CUED series.
