Bayesian Reinforcement Learning
If you have a question about this talk, please contact Colorado Reed.
Reinforcement learning (RL) is the problem of learning optimal behaviour in an initially unfamiliar Markov Decision Process (MDP) environment through interaction and evaluative feedback. Until recently, RL algorithms have relied on suboptimal exploration strategies to strike a balance between ‘exploitation’ actions, which use current knowledge of the MDP to maximise expected return, and ‘exploration’ actions, which gain information about the MDP to improve the return of future exploitation. Bayesian reinforcement learning (BRL) captures and reasons about uncertainty in the elements of the MDP, where ‘classic’ RL does not. We focus on modelling uncertainty in an agent’s transition probabilities, often termed ‘model-based’ BRL. By planning in a belief space over transition probabilities, BRL implicitly resolves the classic exploration–exploitation dilemma optimally. Exact computation is intractable in general, but approximations exist, of which several key algorithms are presented.
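The model-based BRL idea above can be sketched with one common approximation, posterior sampling (Thompson-style) RL: maintain a Dirichlet posterior over each row of the transition matrix, sample a model from the posterior, plan greedily in it, act, and update the counts. This is a minimal illustrative sketch, not the talk's method; the 2-state MDP, rewards, discount, and horizon are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state, 2-action MDP (sizes and rewards are assumptions).
n_states, n_actions = 2, 2
rewards = np.array([[0.0, 1.0],   # R[s, a]
                    [1.0, 0.0]])
# Dirichlet concentration parameters: a uniform prior over each
# transition distribution P(s' | s, a).
alpha = np.ones((n_states, n_actions, n_states))

def sample_model(alpha):
    """Draw one transition model P(s' | s, a) from the posterior."""
    return np.array([[rng.dirichlet(alpha[s, a])
                      for a in range(n_actions)]
                     for s in range(n_states)])

def greedy_policy(P, R, gamma=0.9, iters=200):
    """Value iteration on a sampled model; returns the greedy policy."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * P @ V          # Q[s, a]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Interaction loop: sample a model, plan, act, update the posterior.
true_P = sample_model(np.ones_like(alpha))  # stand-in for the unknown MDP
s = 0
for _ in range(100):
    policy = greedy_policy(sample_model(alpha), rewards)
    a = policy[s]
    s_next = rng.choice(n_states, p=true_P[s, a])
    alpha[s, a, s_next] += 1            # Bayesian posterior update
    s = s_next
```

Because actions are greedy for a *sampled* model rather than the posterior mean, exploration arises automatically from posterior uncertainty, which is the sense in which BRL dissolves the exploration–exploitation dilemma rather than balancing it by hand.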
This talk is part of the Machine Learning Reading Group @ CUED series.