Generalisation in neural networks
- Speaker: Marton Havasi
- Date & Time: Wednesday 01 May 2019, 14:00 - 15:30
- Venue: Engineering Department, CBL Room BE-438
Abstract
It is currently not well understood why large neural networks with more parameters than training data generalise well from training data to test data. This talk explores the main hurdles and potential future avenues to improve our understanding of these models. In particular, we will look at a few approaches from Statistical Learning Theory to prove generalisation properties of neural networks. First, we examine a more traditional approach that bounds the capacity of learning models (VC dimension), followed by a review of the more recent approaches that utilise information theory to prove generalisation.
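For orientation, here is a rough sketch of the two flavours of bound the abstract alludes to; the exact formulations covered in the talk may differ, constants vary across statements of these results, and the information-theoretic bound shown (due to Xu & Raginsky, 2017) is one representative example of that line of work.

```latex
% Classical VC-dimension bound: for a hypothesis class \mathcal{H} of
% VC dimension d and n i.i.d. training samples, with probability at
% least 1 - \delta, every h \in \mathcal{H} satisfies
R(h) \le \hat{R}(h)
  + \sqrt{\frac{d\left(\ln(2n/d) + 1\right) + \ln(4/\delta)}{n}}

% Information-theoretic bound (Xu & Raginsky, 2017): if the loss is
% \sigma-sub-Gaussian, a learning algorithm that outputs weights W
% from training sample S has expected generalisation gap bounded by
\left|\, \mathbb{E}\big[ R(W) - \hat{R}(W) \big] \,\right|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}
```

The contrast the abstract draws is visible here: the VC bound depends only on the capacity d of the hypothesis class and becomes vacuous for overparameterised networks where d exceeds n, whereas the information-theoretic bound instead depends on the mutual information I(S; W), i.e. how much the learned weights memorise about the particular training sample.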
Series
This talk is part of the Machine Learning Reading Group @ CUED series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Cambridge University Engineering Department Talks
- Centre for Smart Infrastructure & Construction
- Chris Davis' list
- Computational Continuum Mechanics Group Seminars
- custom
- Engineering Department, CBL Room BE-438
- Featured lists
- Guy Emerson's list
- Hanchen DaDaDash
- Inference Group Journal Clubs
- Inference Group Summary
- Information Engineering Division seminar list
- Interested Talks
- Machine Learning Reading Group
- Machine Learning Reading Group @ CUED
- Machine Learning Summary
- ML
- ndk22's list
- ob366-ai4er
- Quantum Matter Journal Club
- Required lists for MLG
- rp587
- School of Technology
- Simon Baker's List
- TQS Journal Clubs
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449
Note: Ex-directory lists are not shown.