Deep Reinforcement Learning from Human Preferences

If you have a question about this talk, please contact Adrià Garriga Alonso.

How do you teach an algorithm to do a backflip, or to play a game where rewards are sparse? In this seminar we will discuss how algorithms can learn from human preferences instead of from pre-specified reward functions.

Removing the need for humans to hand-write reward functions matters for safety: a reward function that is even slightly wrong can incentivise dangerous behaviour. In the paper, preference learning is only used to teach physical behaviours, but one can imagine applying the same approach to learning moral values.

We will be looking at the paper ‘Deep Reinforcement Learning from Human Preferences’ (Christiano et al., 2017). We will discuss the model used and the experiments in three domains: simulated robotics, Atari arcade games and novel behaviours.
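For orientation, here is a minimal sketch of the training objective at the heart of the paper: a reward model is fitted to human comparisons between pairs of short trajectory segments using a Bradley-Terry style cross-entropy loss, and a standard RL algorithm is then trained on the learned reward instead of a hand-written one. The names (RewardModel, preference_loss) and the PyTorch framing below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an (observation, action) pair to a scalar reward estimate."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (T, obs_dim), act: (T, act_dim) -> per-step rewards of shape (T,)
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(r_hat: RewardModel, seg1, seg2, pref: torch.Tensor) -> torch.Tensor:
    """Cross-entropy loss on the modelled probability that a human prefers
    seg1 over seg2. Each segment is an (obs, act) tensor pair of shape
    (T, dim); pref is 1.0 if the human chose seg1, 0.0 if seg2,
    0.5 if the segments were judged equally good."""
    # Sum predicted rewards over each segment; a softmax over the two sums
    # gives the Bradley-Terry probability that segment 1 is preferred.
    sum1 = r_hat(*seg1).sum()
    sum2 = r_hat(*seg2).sum()
    log_p = torch.log_softmax(torch.stack([sum1, sum2]), dim=0)
    return -(pref * log_p[0] + (1 - pref) * log_p[1])

# Hypothetical usage: collect pairs of short clips from the current policy,
# ask a human which looks better, take gradient steps on preference_loss,
# and train the policy (e.g. with A2C or TRPO, as in the paper) on rewards
# predicted by r_hat rather than on an environment reward.
```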

Link to paper: https://arxiv.org/abs/1706.03741

Slides: https://valuealignment.ml/talks/2017-11-15-deeprl-human-prefs.pdf

This talk is part of the Engineering Safe AI series.
