The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model
- Speaker: Yuting Wei (University of Pennsylvania)
- Date & Time: Monday 10 November 2025, 11:30–12:10
- Venue: Seminar Room 1, Newton Institute
Abstract
In this talk, we investigate model robustness in reinforcement learning (RL) to reduce the sim-to-real gap in practice. We adopt the framework of distributionally robust Markov decision processes (RMDPs), aimed at learning a policy that optimizes the worst-case performance when the deployed environment falls within a prescribed uncertainty set around the nominal MDP. Despite recent efforts, the sample complexity of RMDPs remained mostly unsettled regardless of the uncertainty set in use. It was unclear if distributional robustness bears any statistical consequences when benchmarked against standard RL. Assuming access to a generative model that draws samples based on the nominal MDP, we provide a near-optimal characterization of the sample complexity of RMDPs when the uncertainty set is specified via either the total variation (TV) distance or χ² divergence. The algorithm studied here is a model-based method called distributionally robust value iteration, which is shown to be near-optimal for the full range of uncertainty levels.
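To make the setting concrete, here is a minimal toy sketch (not the authors' implementation) of distributionally robust value iteration for a tabular MDP with an (s,a)-rectangular TV uncertainty set. It uses the fact that, for a TV ball of radius σ, the worst-case expectation is attained by moving up to σ probability mass from the highest-value states onto the lowest-value state. All function names and the example MDP are illustrative.

```python
import numpy as np

def worst_case_expectation_tv(p, v, sigma):
    """Worst-case E_q[v] over {q : (1/2) * ||q - p||_1 <= sigma}.
    Toy illustration: the adversary shifts up to `sigma` probability
    mass from the highest-value states onto the lowest-value state."""
    q = p.astype(float).copy()
    budget = sigma
    lo = int(np.argmin(v))                 # state receiving the shifted mass
    for s in np.argsort(-v):               # drain highest-value states first
        if budget <= 0:
            break
        if s == lo:
            continue
        take = min(q[s], budget)
        q[s] -= take
        budget -= take
    q[lo] += sigma - budget                # deposit the moved mass
    return q @ v

def robust_value_iteration(P, R, sigma, gamma=0.9, iters=200):
    """Distributionally robust value iteration (toy version).
    P: nominal kernel of shape (S, A, S); R: rewards of shape (S, A)."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation_tv(P[s, a], V, sigma)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)                  # robust Bellman optimality update
    return V, Q.argmax(axis=1)

# Tiny random 2-state, 2-action nominal MDP (illustrative only).
rng = np.random.default_rng(0)
P = rng.random((2, 2, 2)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((2, 2))
V_nominal, _ = robust_value_iteration(P, R, sigma=0.0)   # sigma=0: standard VI
V_robust, _  = robust_value_iteration(P, R, sigma=0.2)   # robust values are lower
```

With σ = 0 the inner minimization is vacuous and the update reduces to standard value iteration; a larger σ shrinks the value function pointwise, which is the robustness/performance trade-off the talk quantifies in terms of sample complexity.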
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute