Scaling Multi-Agent Reinforcement Learning to the Mean-Field Regime
- Speaker: Batu Yardim (ETH Zürich)
- Date & Time: Wednesday 12 November 2025, 14:40-15:20
- Venue: Seminar Room 1, Newton Institute
Abstract
Reinforcement Learning (RL) has achieved remarkable success, especially when combined with deep learning; however, scaling RL beyond the single-agent setting remains a major challenge. In particular, the “curse of many agents” hinders the application of RL to systems with thousands or even millions of interacting participants. Such large-scale problems arise naturally in domains such as financial markets, auctions, traffic and resource management, and social systems, where optimal decision-making and computation quickly become intractable. We explore mean-field reinforcement learning (MF-RL) as a principled framework for addressing this challenge under the agent-exchangeability assumption. Our work extends the theoretical foundations of MF-RL with an emphasis on computational aspects and real-world applicability. Specifically, we analyze mean-field approximation properties, study communication and coordination bottlenecks during learning, and examine the computational and statistical complexity of scaling RL to the mean-field regime. Finally, we highlight applications to large-scale incentive design and resource allocation, demonstrating how MF-RL can serve as a bridge between mean-field theory and practical multi-agent RL algorithms.
Series: This talk is part of the Isaac Newton Institute Seminar Series.
