
Deep Learning in Practice


If you have a question about this talk, please contact Matthew Ireland.

This talk will explore some of the problems that arise in deep learning and the mathematical techniques used to overcome them. We will begin by examining how the vanishing gradient problem threatens the viability of deep networks, then look at how researchers over the past decade have overcome it with a number of distinct methods that drastically improve both the rate of learning and, ultimately, the accuracy of these networks. We will investigate how the cross-entropy cost function improves learning speed, and show how an alternative to sigmoid neurons can avoid the problem of neuron saturation. We will also analyse the motivation behind regularisation and show how it can be used to combat overfitting. These techniques are used by pioneering neural networks, such as the winners of the Large Scale Visual Recognition Challenge, and can result in networks that achieve human-level performance on some tasks.
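The three ideas the abstract names can be sketched numerically. The following is a minimal NumPy illustration, not the talk's own material: the function names are illustrative, and the "alternative to sigmoid neurons" is assumed here to be the ReLU, a common choice.

```python
import numpy as np

# The sigmoid saturates: its derivative vanishes for large |z|,
# which is one driver of the vanishing gradient problem in deep stacks.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# The ReLU's derivative is 1 for all positive inputs, so gradients
# pass through unattenuated on that side and the unit never saturates there.
def relu_prime(z):
    return 1.0 if z > 0 else 0.0

# With a sigmoid output layer and the cross-entropy cost, the error term
# for the output pre-activation reduces to (a - y): the sigmoid' factor
# cancels, so learning stays fast even when the neuron is badly wrong.
def cross_entropy_delta(a, y):
    return a - y

# The quadratic cost keeps the sigmoid' factor, so a saturated but
# wrong neuron learns very slowly.
def quadratic_delta(a, y, z):
    return (a - y) * sigmoid_prime(z)

# L2 regularisation adds a weight-decay penalty (lam / 2n) * sum(w^2)
# to the cost, shrinking weights toward zero to combat overfitting.
def l2_regularised_cost(base_cost, weights, lam, n):
    return base_cost + (lam / (2.0 * n)) * np.sum(weights ** 2)

# A neuron deep in saturation: almost no gradient through sigmoid,
# full gradient through ReLU.
print(sigmoid_prime(10.0))   # tiny (~4.5e-5)
print(relu_prime(10.0))      # 1.0
```

Comparing `quadratic_delta(a, y, z)` against `cross_entropy_delta(a, y)` for a saturated, badly wrong output neuron (e.g. `a` near 1, `y = 0`, large `z`) shows the learning-slowdown the abstract refers to.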

This talk is part of the Churchill CompSci Talks series.



© 2006-2018, University of Cambridge.