Convergence of the actor-critic gradient flow for entropy regularised MDPs in general action spaces
- Speaker: David Siska (University of Edinburgh)
- Date & Time: Thursday 13 November 2025, 16:30 - 17:10
- Venue: Seminar Room 1, Newton Institute
Abstract
We prove the stability and global convergence of a coupled actor-critic gradient flow for infinite-horizon, entropy-regularised Markov decision processes (MDPs) with continuous state and action spaces, under linear function approximation and Q-function realisability. We consider a version of the actor-critic gradient flow in which the critic is updated by temporal-difference (TD) learning while the policy is updated by a policy mirror descent method on a separate timescale. We establish stability and exponential convergence of the actor-critic flow to the optimal policy. Finally, we address the interplay between timescale separation and entropy regularisation and its effect on stability and convergence. This is joint work with Denis Zorba and Lukasz Szpruch.
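To fix ideas, the coupled dynamics described in the abstract can be sketched schematically as below. This is an illustrative rendering only, not the flow studied in the talk: the linear critic $Q_\theta(s,a)=\theta^\top\phi(s,a)$, entropy weight $\tau>0$, discount $\gamma$, occupation measure $d^{\pi_t}$, and timescale parameter $\varepsilon>0$ are all assumed notation.

```latex
% Schematic coupled actor-critic gradient flow (illustrative; notation is assumed,
% not taken from the talk). Critic: linear approximation Q_\theta(s,a) = \theta^\top \phi(s,a).
\begin{align*}
  % Critic: TD flow towards the soft (entropy-regularised) Bellman target,
  % run on the fast timescale (\varepsilon small).
  \varepsilon\,\dot{\theta}_t
    &= \mathbb{E}_{(s,a)\sim d^{\pi_t}}\!\Big[
         \Big(r(s,a)
           + \gamma\,\mathbb{E}_{s',\,a'\sim\pi_t(\cdot\mid s')}
             \big[Q_{\theta_t}(s',a') - \tau \log \pi_t(a'\mid s')\big]
           - Q_{\theta_t}(s,a)\Big)\,\phi(s,a)\Big],\\[4pt]
  % Actor: entropy-regularised policy mirror descent flow on the slow timescale,
  % written through the policy's log-density; the integral term keeps \pi_t(\cdot\mid s)
  % normalised as a probability measure on the action space \mathcal{A}.
  \partial_t \log \pi_t(a\mid s)
    &= Q_{\theta_t}(s,a) - \tau \log \pi_t(a\mid s)
       - \int_{\mathcal{A}} \big(Q_{\theta_t}(s,a') - \tau \log \pi_t(a'\mid s)\big)\,
         \pi_t(\mathrm{d}a'\mid s).
\end{align*}
```

In this reading, the abstract's "interplay of the timescale separation and entropy regularisation" concerns how small $\varepsilon$ (fast critic) and the strength of $\tau$ jointly govern stability and the exponential rate of convergence.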
Series: This talk is part of the Isaac Newton Institute Seminar Series.