
Ambitious Value Learning

If you have a question about this talk, please contact Adrià Garriga Alonso.

This week we read the “Ambitious Value Learning” series by Rohin Shah (and others): the introduction and the posts under “Ambitious Value Learning” at https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc . Please read them before the session.

Ambitious value learning is the “traditional” AI safety research agenda: attempting to learn (or write down) all of human values, so that they can later be given to an AI to maximise. If you think of your AI as a long-term goal-oriented agent, this is almost the only way to solve the problem. In this series of posts Rohin challenges this problem framing, arguing that we may need to step outside of it in order to implement safe AGI. The first part describes the framing and some of the problems that have been found with it over the years.

As usual, there will be free pizza. The first half hour is for stragglers to finish reading.

Invite your friends to join the mailing list (https://lists.cam.ac.uk/mailman/listinfo/eng-safe-ai), the Facebook group (https://www.facebook.com/groups/1070763633063871) or the talks.cam page (https://talks.cam.ac.uk/show/index/80932). Details about the next meeting, each week’s topic and other events will be advertised in these places.

This talk is part of the Engineering Safe AI series.

