University of Cambridge, Computer Laboratory Wednesday Seminars

The Magic of Machine Learning and Classifier ensembles


If you have a question about this talk, please contact Rafal Mantiuk.

Deep neural networks, notably convolutional neural networks (CNNs), have dominated the landscape of machine learning and pattern recognition for over a decade now, at least in vision, speech and language recognition. While numerous bespoke CNN models compete for the top places in popular worldwide contests, these classifiers are far from ideal. To raise their accuracy further, ideas and algorithms from classical pattern recognition may prove useful. Along these lines, we consider combining classifiers into an ensemble. The aim is to offer a more accurate and robust classification decision than that of a single classifier. For a successful ensemble, the individual classifiers must be as diverse and as accurate as possible. While diversity has long been a focus of research, the rule for combining the individual votes has often been marginalised. This talk will introduce classifier ensembles along with some combination rules. Using a MATLAB demo, we will demonstrate the importance of the diversity of the individual classifiers in the ensemble and the merit of choosing a suitable combination rule.
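The benefit of combining diverse classifiers can be sketched with a small simulation (this is an illustrative sketch, not the talk's MATLAB demo): three hypothetical classifiers, each 70% accurate with independent errors, are combined by majority vote, the simplest combination rule. The ensemble's expected accuracy rises to 3p²(1−p) + p³ ≈ 0.784 for p = 0.7.

```python
import random

random.seed(0)

# Hypothetical setup: a two-class problem, three base classifiers,
# each correct with probability 0.7, errors assumed independent.
n_samples = 10_000
n_classifiers = 3
p_correct = 0.7
truth = [random.randint(0, 1) for _ in range(n_samples)]

def predict(label):
    # A vote that is correct with probability p_correct, independently.
    return label if random.random() < p_correct else 1 - label

votes = [[predict(y) for y in truth] for _ in range(n_classifiers)]

# Majority-vote combination rule: the ensemble outputs the class
# that receives more than half of the individual votes.
majority = [1 if sum(col) > n_classifiers / 2 else 0
            for col in zip(*votes)]

acc_individual = [sum(v == y for v, y in zip(vs, truth)) / n_samples
                  for vs in votes]
acc_ensemble = sum(m == y for m, y in zip(majority, truth)) / n_samples
print("individual:", acc_individual)
print("ensemble:  ", acc_ensemble)
```

If the base classifiers made identical (perfectly correlated) errors, the majority vote would be no better than any single member; the gain comes entirely from their diversity.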

This talk is part of the Computer Laboratory Wednesday Seminars series.

