Bayesian Reinforcement Learning
If you have a question about this talk, please contact Colorado Reed.
Reinforcement learning (RL) is the problem of learning optimal behaviour in an initially unfamiliar Markov Decision Process (MDP) environment through interaction and evaluative feedback. Classic RL algorithms rely on heuristic, non-optimal exploration strategies to strike a balance between ‘exploiting’ current knowledge of the MDP to maximise expected returns and taking ‘exploration’ actions that gain information about the MDP, improving the return of exploitation actions in the future. Bayesian reinforcement learning (BRL) explicitly captures and reasons about uncertainty in the elements of the MDP, where classic RL does not. We focus on modelling uncertainty in an agent’s transition probabilities, often termed ‘model-based’ BRL. By planning in a belief space over transition probabilities, BRL implicitly resolves the classic ‘exploitation & exploration’ dilemma optimally. This computation is shown to be intractable in general, although approximations exist, several key algorithms for which are presented.
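As an illustrative sketch (not part of the talk itself), the standard model-based BRL belief over transition probabilities can be represented by a Dirichlet distribution per state-action pair: observed transitions update the pseudo-counts conjugately, and sampling a full model from the posterior (as in posterior/Thompson sampling) is one common tractable approximation to Bayes-optimal exploration. The class name and structure below are hypothetical choices for the example.

```python
import numpy as np

class DirichletTransitionBelief:
    """Belief over MDP transition probabilities: one Dirichlet per (s, a)."""

    def __init__(self, n_states, n_actions, prior=1.0):
        # alpha[s, a, s'] are Dirichlet pseudo-counts; prior=1.0 is uniform.
        self.alpha = np.full((n_states, n_actions, n_states), prior)

    def update(self, s, a, s_next):
        # Conjugate update: one more observed transition (s, a) -> s_next.
        self.alpha[s, a, s_next] += 1.0

    def posterior_mean(self, s, a):
        # Expected transition distribution under the current belief.
        return self.alpha[s, a] / self.alpha[s, a].sum()

    def sample_model(self, rng):
        # Draw one complete transition model from the posterior; an agent
        # can then plan optimally in this sampled MDP (Thompson sampling).
        n_s, n_a, _ = self.alpha.shape
        return np.array([[rng.dirichlet(self.alpha[s, a])
                          for a in range(n_a)] for s in range(n_s)])
```

For example, after twice observing state 0 move to state 1 under action 0, the posterior mean shifts towards state 1, while the remaining pseudo-counts keep some probability on unobserved successors, which is exactly the uncertainty a Bayes-adaptive planner exploits.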
This talk is part of the Machine Learning Reading Group @ CUED series.