Solving mean-field stochastic control problems by using deep learning
- Speaker: Nacira Agram (KTH Stockholm)
- Date & Time: Wednesday 20 April 2022, 11:30 - 12:15
- Venue: Seminar Room 1, Newton Institute
Abstract
The two famous approaches to solving stochastic control problems are Bellman's dynamic programming and Pontryagin's maximum principle. Dynamic programming can be very efficient, but it works only if the system is Markovian. The maximum principle, on the other hand, does not require the system to be Markovian, but it has the drawback of involving complicated backward stochastic differential equations. Mean-field systems are not Markovian a priori, but they can be made Markovian by augmenting the system with the Fokker-Planck equation for the law of the state. Dynamic programming can then be used to study optimal control of mean-field equations. Mean-field dynamics have many applications; in this talk I will present two in particular: optimal energy consumption by a cortex neural network, and initial investment problems. We will apply stochastic control methods to solve these problems. Since explicit solutions are sometimes difficult to find mathematically, we will use numerical methods, in particular deep learning techniques, to solve special cases of the problems discussed above explicitly.
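To illustrate the kind of approach the abstract describes, here is a minimal, self-contained sketch of a toy linear-quadratic mean-field control problem, dX_t = u_t dt + sigma dW_t with cost E[∫(u_t² + (X_t − E[X_t])²) dt + X_T²]. The feedback control u(t, x, E[X_t]) is parameterized by a small neural network, and the empirical mean over simulated paths stands in for the law of the state. All problem data, names, and the simple random-search optimizer (a stand-in for gradient-based training) are illustrative assumptions, not the speaker's method.

```python
import numpy as np

N, T, STEPS, SIGMA = 512, 1.0, 20, 0.2   # paths, horizon, time grid, noise
DT = T / STEPS

def init_params(rng, hidden=8):
    # one-hidden-layer network mapping (t, x, mean) -> control u
    return [rng.normal(0, 0.3, (3, hidden)), np.zeros(hidden),
            rng.normal(0, 0.3, (hidden, 1)), np.zeros(1)]

def control(params, t, x, m):
    W1, b1, W2, b2 = params
    inp = np.stack([np.full_like(x, t), x, np.full_like(x, m)], axis=1)
    return (np.tanh(inp @ W1 + b1) @ W2 + b2).ravel()

def cost(params, seed):
    # Euler-Maruyama simulation of the controlled SDE; fixing the seed
    # gives common random numbers when comparing candidate controls
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, N)          # initial states
    J = np.zeros(N)
    for k in range(STEPS):
        m = x.mean()                     # empirical stand-in for E[X_t]
        u = control(params, k * DT, x, m)
        J += (u**2 + (x - m)**2) * DT    # accumulate running cost
        x = x + u * DT + SIGMA * np.sqrt(DT) * rng.normal(size=N)
    return (J + x**2).mean()             # add terminal cost X_T^2

# Greedy random search over the weights: perturb, keep only improvements.
rng = np.random.default_rng(0)
params = init_params(rng)
best = cost(params, seed=1)
for _ in range(200):
    trial = [p + 0.05 * rng.normal(size=p.shape) for p in params]
    c = cost(trial, seed=1)
    if c < best:
        params, best = trial, c
```

In the talk's setting one would instead train the network by stochastic gradient descent on the simulated cost; this sketch only shows the shared structure, namely simulate the controlled mean-field dynamics, evaluate the cost, and improve the parameterized control.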
Series This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute