

## A quick way to learn a mixture of exponentially many linear models

- Geoffrey Hinton, Canadian Institute for Advanced Research & University of Toronto
- Monday 15 June 2009, 15:00-16:00
- TCM Seminar Room, Cavendish Laboratory, Department of Physics.
If you have a question about this talk, please contact David MacKay.

*Note unusual time.*

Mixtures of linear models can be used to model data that lies on or near a smooth non-linear manifold. A proper Bayesian treatment can be applied to toy data to determine the number of models in the mixture and the dimensionality of each linear model, but this neurally uninspired approach completely misses the main problem: real data with many degrees of freedom in the manifold requires a mixture with an exponential number of components.

It is quite easy to fit mixtures of 2^1000 linear models by using a few tricks. First, each linear model selects from a pool of shared factors using the selection rule that factors with negative values are ignored. Second, undirected linear models are used to simplify inference, and the models are trained by matching pairwise statistics. Third, Poisson noise is used to implement L1 regularization of the activities of the factors. The factors are then threshold linear neurons with Poisson noise, and their positive integer activities are very sparse.

Preliminary results suggest that these exponentially large mixtures work very well as modules for greedy, layer-by-layer learning of deep networks. Even with one eye closed, they outperform Support Vector Machines for recognizing 3-D images of objects from the NORB database.

This talk is part of the Inference Group series.

## This talk is included in these lists:

- All Cavendish Laboratory Seminars
- All Talks (aka the CURE list)
- Biology
- CSIC Research Talk
- Cambridge Neuroscience Seminars
- Cambridge talks
- Centre for Health Leadership and Enterprise
- Chris Davis' list
- Featured lists
- Guy Emerson's list
- Hanchen DaDaDash
- Inference Group
- Inference Group Summary
- Interested Talks
- Joint Machine Learning Seminars
- Life Science
- Life Sciences
- ME Seminar
- ML
- Machine Learning Summary
- Neurons, Fake News, DNA and your iPhone: The Mathematics of Information
- Neuroscience
- Neuroscience Seminars
- Required lists for MLG
- School of Physical Sciences
- Stem Cells & Regenerative Medicine
- TCM Seminar Room, Cavendish Laboratory, Department of Physics
- Thin Film Magnetic Talks
- dh539
- rp587
- yk373's list
Note that ex-directory lists are not shown.
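The first trick described in the abstract can be illustrated with a minimal NumPy sketch. The idea is that a pool of K shared factors with a rectifying selection rule (factors with negative values are ignored) implicitly defines a mixture of up to 2^K linear models, one per subset of active factors, without ever enumerating them. All names, shapes, and parameter choices below are illustrative assumptions, not details from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 1000, 50  # 1000 shared factors -> a mixture of up to 2^1000 linear models
W = rng.normal(size=(K, D)) / np.sqrt(D)  # weight vector for each shared factor
b = rng.normal(size=K) - 2.0              # negative biases keep activities sparse

def factor_activities(x):
    """Threshold-linear activities: factors with negative values are ignored."""
    a = W @ x + b
    return np.maximum(a, 0.0)  # rectification acts as the factor-selection rule

x = rng.normal(size=D)        # a single data point
a = factor_activities(x)
active = a > 0

# The active subset identifies which of the 2^K implicit linear models
# this data point is assigned to.
print(f"{int(active.sum())} of {K} factors active")

# Poisson noise on the non-negative activities: the expected activity equals
# the rectified value, and the sampled positive integer activities are sparse,
# which acts like L1 regularization on the factor activities.
spikes = rng.poisson(a)
print(f"{np.count_nonzero(spikes)} nonzero integer activities")
```

With the negative biases used here, only a small fraction of the 1000 factors fire for any given input, so each data point touches a tiny subset of the exponentially many mixture components.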