Compositional Features and Feedforward Neural Networks for High Dimensional Problems
- Speaker: Wei Kang (Naval Postgraduate School)
- Date & Time: Tuesday 16 November 2021, 16:30 - 17:00
- Venue: Seminar Room 1, Newton Institute
Abstract
Deep learning has had many impressive empirical successes in science and industry. On the other hand, the lack of theoretical understanding of the field has been a significant barrier to the adoption of the technology. In this talk, I will discuss some compositional features of high dimensional problems and their mathematical properties, which shed light on the question of why deep learning works for high dimensional problems. It is widely observed in science and engineering that complicated, high dimensional input-output relations can be represented as compositions of functions with low input dimensions. Their compositional structures can be effectively represented using layered directed acyclic graphs (layered DAGs). Based on the layered DAG formulation, an algebraic framework and approximation theory are developed for compositional functions, including neural networks. The theory leads to the proof of several complexity/approximation error bounds of deep neural networks for problems of regression and dynamical systems.
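As a minimal illustration of the idea in the abstract (the example function and node names below are hypothetical, not taken from the talk): a function of four variables can be built from nodes that each take only two inputs, and the resulting layered DAG is exactly the kind of compositional structure the talk describes.

```python
import math

# Hypothetical layered-DAG sketch: a function f: R^4 -> R composed of
# low-input-dimension nodes. Each node sees at most two inputs.

# Layer 1: each node depends on only two of the four inputs.
def h1(x1, x2):
    return math.sin(x1) + x2

def h2(x3, x4):
    return x3 * x4

# Layer 2: a single node combining the layer-1 outputs.
def g(u, v):
    return u + v ** 2

# The composed function. Although f has four inputs, no node in its
# layered DAG has input dimension greater than two.
def f(x1, x2, x3, x4):
    return g(h1(x1, x2), h2(x3, x4))
```

The point of such a decomposition is that approximating each low-dimensional node with a small network, then composing the approximations, can sidestep the curse of dimensionality that afflicts direct approximation of the full high-dimensional map.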
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute