Explanations for medical artificial intelligence
- Speaker: Rune Nyrup (Leverhulme Centre for the Future of Intelligence, Cambridge)
- Date & Time: Wednesday 17 October 2018, 13:00 - 14:30
- Venue: Seminar Room 2, Department of History and Philosophy of Science
Abstract
(Joint work with Diana Robinson)
AI systems are currently being developed and deployed for a variety of medical purposes. A common objection to this trend is that medical AI systems risk being ‘black boxes’, unable to explain their decisions. How serious this objection is remains unclear. As some commentators point out, human doctors too are often unable to properly explain their decisions. In this paper, we seek to clarify this debate. We (i) analyse the reasons why explainability is important for medical AI, (ii) outline some of the features that make for good explanations in this context, and (iii) compare how well humans and AI systems are able to satisfy these criteria. We conclude that while humans currently have the edge, recent developments in technical AI research may allow us to construct medical AI systems which are better explainers than humans.
Series
This talk is part of the CamPoS (Cambridge Philosophy of Science) seminar series.