
The troublesome kernel — On AI generated hallucinations in deep learning for inverse problems


  • Speaker: Nina Maria Gottschling (Cambridge Centre for Analysis, University of Cambridge)
  • Time: Tuesday 30 November 2021, 09:00-10:00
  • Venue: Seminar Room 2, Newton Institute


MDL - Mathematics of deep learning

There is overwhelming empirical evidence that deep learning (DL) leads to unstable methods in applications ranging from image classification and computer vision to voice recognition and automated diagnosis in medicine. Recently, a similar instability phenomenon has been discovered when DL is used to solve certain problems in computational science, namely inverse problems in imaging. This talk presents a comprehensive mathematical analysis explaining the many facets of the instability phenomenon in DL for inverse problems. These instabilities include, in particular, false positives and false negatives as well as AI-generated hallucinations. Furthermore, the results indicate how training itself typically encourages AI hallucinations and instabilities.
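
For orientation, here is a minimal sketch of the setting, assuming the standard formulation from the inverse-problems literature; the notation (A, Psi, L^epsilon) is illustrative and not taken from the talk itself. A linear inverse problem asks to recover x in C^N from undersampled, noisy measurements:

  % Measurement model: m < N, so the problem is underdetermined.
  \[
    y = Ax + e, \qquad A \in \mathbb{C}^{m \times N}, \quad m < N.
  \]

A reconstruction method (for instance a trained network) is a map \(\Psi : \mathbb{C}^m \to \mathbb{C}^N\). Instability refers to a large local Lipschitz constant of \(\Psi\) at the measurement \(y\):

  % Worst-case amplification of a small measurement perturbation delta.
  \[
    L^{\epsilon}(\Psi, y) \;=\; \sup_{0 < \|\delta\|_2 \le \epsilon}
      \frac{\|\Psi(y + \delta) - \Psi(y)\|_2}{\|\delta\|_2}.
  \]

When this quantity is large, a tiny perturbation of the measurements can produce a visibly different reconstruction, which is how hallucinated or removed image features can arise.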

This talk is part of the Isaac Newton Institute Seminar Series.
