
Parameter estimation in deep learning architectures: Two new insights.


If you have a question about this talk, please contact Zoubin Ghahramani.

Note: time is 12 noon (*not* 11am)

This talk presents two new insights. First, I will show that it is possible to train most deep learning models, regardless of the choice of regularization, architecture, algorithm, or dataset, by learning only a small number of the parameters and predicting the rest with nonparametric methods. Often this approach makes it possible to learn only 10% of the parameters without a drop in accuracy. Second, I will introduce a new method (LAP) for parameter estimation in loopy undirected probabilistic graphical models, also known as Markov random fields, that is linear in the number of cliques, embarrassingly parallel, data efficient, and statistically efficient.
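To give a rough sense of the first insight, below is a minimal numpy sketch (not the speaker's implementation) of the general idea: train only a small set of "anchor" rows of a weight matrix and predict the remaining rows nonparametrically, here with kernel ridge regression over row positions. The choice of kernel, the feature used for each row, and all function names are illustrative assumptions.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Squared-exponential kernel between the row vectors of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def predict_parameters(W_alpha, anchor_idx, n_rows, gamma=50.0):
    """Expand a small set of learned rows W_alpha into a full weight matrix
    by kernel ridge regression over row positions. Here each row's 'feature'
    is just its normalised index; in practice one might use spatial
    coordinates of input pixels or learned embeddings instead."""
    coords = np.linspace(0.0, 1.0, n_rows)[:, None]   # one feature per row
    anchors = coords[anchor_idx]                      # features of learned rows
    K_aa = rbf_kernel(anchors, anchors, gamma) + 1e-6 * np.eye(len(anchor_idx))
    K_xa = rbf_kernel(coords, anchors, gamma)
    # Each predicted row is a kernel-weighted combination of the learned rows.
    return K_xa @ np.linalg.solve(K_aa, W_alpha)

# Toy usage: train only ~10% of the rows of a 100 x 50 weight matrix
# (here stand-ins drawn at random) and predict the rest.
rng = np.random.default_rng(0)
n_rows, n_cols = 100, 50
anchor_idx = np.linspace(0, n_rows - 1, n_rows // 10).astype(int)
W_alpha = rng.normal(size=(len(anchor_idx), n_cols))  # the parameters actually learned
W_full = predict_parameters(W_alpha, anchor_idx, n_rows)
print(W_full.shape)  # (100, 50)

In this sketch only W_alpha would be updated during training; the full matrix is reconstructed from it on the fly, which is one simple way to realise the "learn a few parameters, predict the rest" idea described in the abstract.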

This talk is part of the Machine Learning @ CUED series.

