Tutorial: Generalization in Reinforcement Learning: From Foundations to New Frontiers
- 👤 Speaker: Csaba Szepesvári (University of Alberta)
- 📅 Date & Time: Wednesday 05 November 2025, 10:00 - 13:00
- 📍 Venue: Enigma Room, The Alan Turing Institute
Abstract
Reinforcement learning (RL) and optimal control share a deep intellectual heritage in addressing sequential decision-making under uncertainty. This tutorial develops a computer scientist’s perspective on RL theory—one that places generalization, sample efficiency, and computational tractability at the center of the analysis. A particular focus will be on the stylized setting of linear function approximation, which offers the best prospects for developing and understanding tractable algorithms. The tutorial will illustrate how this perspective shapes problem formulations, abstractions, and algorithmic insights through several representative results. It will conclude by considering how similar ideas might inform reasoning and planning in large language models, raising more questions than it answers. The tutorial follows the new MIT Press textbook “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches”, available at www.marl-book.com.
Series: This talk is part of the Isaac Newton Institute Seminar Series.