Bayesian Reinforcement Learning
If you have a question about this talk, please contact Colorado Reed.
Reinforcement learning (RL) is the problem of learning optimal behaviour in an initially unfamiliar Markov Decision Process (MDP) environment through interaction and evaluative feedback. Classic RL algorithms rely on heuristic, non-optimal exploration strategies to strike a balance between 'exploiting' current knowledge of the MDP to maximise expected return and taking 'exploration' actions that gain information about the MDP, improving the return of future exploitation actions. Bayesian reinforcement learning (BRL) explicitly captures and reasons about uncertainty in the MDP's elements, where classic RL does not. We focus on modelling uncertainty in the agent's transition probabilities, often termed 'model-based' BRL. By planning in a belief space over transition probabilities, BRL implicitly resolves the classic 'exploitation & exploration' dilemma optimally. Exact computation is intractable in general, although approximations exist, of which several key algorithms are presented.
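To make the abstract concrete, here is a minimal sketch of one well-known approximate model-based BRL algorithm: posterior sampling (Thompson sampling) over transition probabilities. The MDP (2 states, 2 actions, the reward table `R`) is an invented toy example, not from the talk; the agent keeps a Dirichlet posterior over each row of the transition matrix, samples one plausible model per episode, plans in it, and updates its belief from observed transitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (assumed for illustration); dynamics are unknown to the agent.
n_states, n_actions = 2, 2
true_P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.05, 0.95]]])  # true_P[s, a] = next-state distribution
R = np.array([[0.0, 0.0], [0.0, 1.0]])           # R[s, a]: reward 1 only for action 1 in state 1
gamma = 0.95

# Belief over transitions: one Dirichlet(1, ..., 1) prior per (state, action) pair.
alpha = np.ones((n_states, n_actions, n_states))

def value_iteration(P, R, gamma, iters=300):
    """Solve an MDP with known P: Q[s, a] = R[s, a] + gamma * P[s, a] . max_a' Q[s', a']."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        Q = R + gamma * P @ Q.max(axis=1)
    return Q

s = 0
for episode in range(200):
    # Sample one transition model from the current Dirichlet posterior ...
    P_sample = np.array([[rng.dirichlet(alpha[si, a]) for a in range(n_actions)]
                         for si in range(n_states)])
    # ... plan optimally in the sampled model (exploration comes from posterior spread) ...
    policy = value_iteration(P_sample, R, gamma).argmax(axis=1)
    # ... then act for a short episode and update the belief with observed transitions.
    for _ in range(10):
        a = policy[s]
        s_next = rng.choice(n_states, p=true_P[s, a])
        alpha[s, a, s_next] += 1  # conjugate Bayesian update: increment Dirichlet count
        s = s_next

# Posterior mean transition model after learning.
P_mean = alpha / alpha.sum(axis=2, keepdims=True)
```

Planning against a fresh posterior sample each episode is what trades off exploration and exploitation without any explicit bonus term: early on the samples are diverse and induce exploratory policies, and as the Dirichlet counts concentrate, the sampled models (and hence the policies) converge to the true optimum.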
This talk is part of the Machine Learning Reading Group @ CUED series.