The troublesome kernel – On AI generated hallucinations in deep learning for inverse problems
- Speaker: Nina Maria Gottschling (Cambridge Centre for Analysis, University of Cambridge)
- Date & Time: Tuesday 30 November 2021, 09:00–10:00
- Venue: Seminar Room 2, Newton Institute
Abstract
There is overwhelming empirical evidence that Deep Learning (DL) leads to unstable methods in applications ranging from image classification and computer vision to voice recognition and automated diagnosis in medicine. Recently, a similar instability phenomenon has been discovered when DL is used to solve certain problems in computational science, namely, inverse problems in imaging. The talk presents a comprehensive mathematical analysis explaining the many facets of the instability phenomenon in DL for inverse problems. In particular, these instabilities include false positives and false negatives, as well as AI hallucinations. Furthermore, the results indicate how training typically encourages AI hallucinations and instabilities.
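The instability at the heart of the talk is the classical ill-posedness of inverse problems: a tiny perturbation of the measurements can produce a large change in the reconstruction. A minimal NumPy sketch (a hypothetical toy problem, not from the talk) shows this for a badly conditioned linear forward map, where naive inversion amplifies a small data perturbation by four orders of magnitude:

```python
import numpy as np

# Toy linear inverse problem y = A x with an ill-conditioned forward map A.
# A small perturbation of the data y is hugely amplified by naive inversion,
# illustrating the kind of instability the talk analyses for learned methods.
A = np.array([[1.0, 0.0],
              [0.0, 1e-4]])          # condition number ~1e4
x_true = np.array([1.0, 1.0])
y = A @ x_true                       # clean measurements

delta = np.array([0.0, 1e-3])        # small measurement perturbation
x_rec = np.linalg.solve(A, y)            # exact recovery from clean data
x_pert = np.linalg.solve(A, y + delta)   # recovery from perturbed data

print(np.linalg.norm(delta))             # input perturbation: 0.001
print(np.linalg.norm(x_pert - x_rec))    # output error: 10.0
```

A learned reconstruction network trained to invert such a map inherits (or, as the talk argues, is typically encouraged by training to acquire) a comparable sensitivity.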
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 2, Newton Institute