
Nonlinear Dynamics of Learning


If you have a question about this talk, please contact Zoubin Ghahramani.

We describe a class of deterministic, weakly chaotic dynamical systems with infinite memory. These "herding systems" combine learning and inference into one algorithm. They convert moments directly into a sequence of pseudo-samples without learning an explicit model. Using the perceptron cycling theorem we can show that Monte Carlo estimates based on these pseudo-samples converge at an optimal rate of O(1/T), due to infinite-range negative auto-correlations. We show that the information content of these sequences, as measured by sub-extensive entropy, can grow as fast as K*log(N). In continuous spaces we can control an infinite number of moments by formulating herding in a Hilbert space. Also in this case, sample averages over arbitrary functions in the Hilbert space will converge at an optimal rate of O(1/T). More generally, we advocate the application of the rich theoretical framework of nonlinear dynamical systems and chaos theory to statistical learning.
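The herding dynamics sketched in the abstract can be illustrated with a short numerical experiment. The sketch below is a minimal, hypothetical implementation (all variable names are my own, not from the talk): given target moments mu over a discrete state space, it iterates the standard herding update x_t = argmax_x ⟨w, φ(x)⟩ followed by w ← w + mu − φ(x_t), and checks that the running feature average approaches mu.

```python
# Minimal herding sketch (illustrative only; setup and names are assumptions).
# Target moments mu = E_p[phi(x)] are converted directly into a deterministic
# sequence of pseudo-samples whose feature averages converge to mu at O(1/T).
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 5 states, each with a 3-dimensional feature vector.
phi = rng.random((5, 3))            # phi[s] = feature vector of state s
p = rng.random(5)
p /= p.sum()                        # an arbitrary target distribution
mu = p @ phi                        # target moments E_p[phi]

w = mu.copy()                       # herding weight vector
T = 10_000
feature_sum = np.zeros(3)
for t in range(T):
    s = int(np.argmax(phi @ w))     # x_t = argmax_x <w, phi(x)>
    w += mu - phi[s]                # w_{t+1} = w_t + mu - phi(x_t)
    feature_sum += phi[s]

# By telescoping, |mean(phi(x_t)) - mu| = |w_0 - w_T| / T, so as long as w
# stays bounded (the perceptron cycling theorem), the error decays as O(1/T).
err = np.linalg.norm(feature_sum / T - mu)
print(f"moment error after {T} steps: {err:.2e}")
```

Note the O(1/T) rate follows directly from the telescoping sum in the update: the moment error equals the drift of w divided by T, and the perceptron cycling theorem keeps that drift bounded whenever mu lies in the convex hull of the features (guaranteed here, since mu is a convex combination of the rows of phi).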

This talk is part of the Machine Learning @ CUED series.



