Comprehensive AI Services
- 👤 Speaker: Adrià Garriga Alonso (University of Cambridge)
- 📅 Date & Time: Wednesday 23 January 2019, 17:00 - 19:00
- 📍 Venue: Cambridge University Engineering Department, CBL Seminar room BE4-38
Abstract
A common critique of the motivation for AI safety is that it rests on many assumptions that are unproven, and that some consider unlikely. The most glaring one is that an artificial general intelligence (AGI) will take the form of a recursively self-improving agent optimising for a long-term goal.
“Comprehensive AI services” (CAIS) is an answer to this critique by Eric Drexler of the Future of Humanity Institute. He proposes an alternative model of AGI: that it emerges as a collection of AI-based software services, each of bounded scope and bounded time to act. If this is a likely scenario for the emergence of AGI, the priorities of current AI safety research should change.
We will discuss the CAIS model and its implications for safety research.
Reading list (as usual, we start reading at 5 pm, but the discussion starts at 5:30 pm)
- Rohin Shah’s summary of CAIS
- Richard Ngo’s summary of CAIS
- Eric Drexler’s technical report: Reframing superintelligence: Comprehensive AI services as general intelligence. Very long; I suggest reading only a few choice sections. It’d be good to write down which sections you read, so we know which to ask you for a summary of and which to discuss together.
Series: This talk is part of the Engineering Safe AI series.