Automating stochastic gradient methods with adaptive batch sizes

VMVW01 - Variational methods, new optimisation techniques and new fast numerical algorithms

This talk will address several issues related to training neural networks using stochastic gradient methods. First, we discuss the difficulties of training in a distributed environment and present a new method, centralVR, for boosting the scalability of training methods. We then turn to the problem of automating stochastic gradient descent and show that learning rate selection can be simplified using “Big Batch” strategies that adaptively choose minibatch sizes.
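To give a rough sense of how an adaptive batch size rule can take over some of the work of learning rate tuning, here is a minimal NumPy sketch of the general idea: grow the minibatch until the sampling noise of the gradient estimate is small relative to the gradient's norm, then take an ordinary SGD step. This is only an illustration loosely in the spirit of the “Big Batch” idea above, not the speakers' implementation; the function names (grad_fn, data), the doubling rule, and the threshold theta are assumptions made for this example.

import numpy as np

def adaptive_batch_sgd_step(grad_fn, data, params, batch_size, lr,
                            theta=1.0, rng=None):
    """Illustrative sketch only (not the talk's algorithm).

    Grow the minibatch until the averaged gradient's norm dominates its
    estimated sampling noise, then take a plain SGD step.
    grad_fn(params, example) is assumed to return a per-example gradient
    with the same shape as params (a 1-D array here).
    """
    rng = rng or np.random.default_rng()
    n = len(data)
    while True:
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        grads = np.stack([grad_fn(params, data[i]) for i in idx])
        g = grads.mean(axis=0)                      # minibatch gradient estimate
        noise = grads.var(axis=0).sum() / len(idx)  # estimated variance of the mean
        # Accept the batch once the "signal" outweighs the noise (or the
        # full data set is used); otherwise double the batch and resample.
        if noise <= theta * float(np.dot(g, g)) or batch_size >= n:
            break
        batch_size = min(2 * batch_size, n)
    return params - lr * g, batch_size

Because the accepted batch is large enough that the gradient estimate is reliable, a single fixed step size lr tends to behave well throughout training, which is the sense in which such strategies simplify learning rate selection.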

This talk is part of the Isaac Newton Institute Seminar Series.
