BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Cambridge MedAI Seminar - March 2025 - Dr Michail Mamalakis and Jo
 shua Rothwell
DTSTART:20250327T114500Z
DTEND:20250327T130000Z
UID:TALK229735@talks.cam.ac.uk
CONTACT:Hannah Clayton
DESCRIPTION:Sign up on Eventbrite: https://www.eventbrite.co.uk/e/cambridg
 e-medai-seminar-series-tickets-1301806461169?aff=oddtdtcreator\n\nJoin us 
 for the *Cambridge AI in Medicine Seminar Series*\, hosted by the *Cancer 
 Research UK Cambridge Centre* and the *Department of Radiology at Addenbro
 oke's*. This series brings together leading experts to explore cutting-edg
 e AI applications in healthcare—from disease diagnosis to drug discovery
 . It's a unique opportunity for researchers\, practitioners\, and students
  to stay at the forefront of AI innovations and engage in discussions shap
 ing the future of AI in healthcare.\n\nThis month's seminar will be held o
 n *Thursday 27 March 2025\, 12-1pm at the Jeffrey Cheah Biomedical Centre 
 (Main Lecture Theatre)\, University of Cambridge* and *streamed online via
  Zoom*. A light lunch from Aromi will be served from 11:45. The event will
  feature the following talks:\n\n*_Explainable and Interpretable AI: Build
 ing Trust and Uncovering Patterns in Healthcare and Neuroscience_ - Dr Mic
 hail Mamalakis\, Research Associate\, Department of Psychiatry\, Universit
 y of Cambridge*\n\nDr Michail Mamalakis is a research scientist at the Uni
 versity of Cambridge\, specializing in AI\, Machine Learning\, Explainable
  AI and Computer Vision for biomedical applications. His work focuses on e
 xplainable AI (XAI) for integrating imaging\, genomics\, and phenotyping d
 ata in neuroscience and clinical decision-making. He has collaborated with
  leading institutions\, including Oxford\, Sheffield\, and Cambridge\, on 
 projects in brain tumors\, Alzheimer’s\, cardiac arrhythmias\, and pulmona
 ry hypertension. His research spans AI-driven biomarker discovery\, uncert
 ainty estimation\, attributional interpretability in functional and structu
 ral imaging and mechanistic interpretability in protein language models an
 d large language models. Currently\, he develops multi-modal AI frameworks
  for Alzheimer’s prediction and glioblastoma analysis\, contributing to hi
 gh-impact projects like EBRAINS 2.0. \n\n*Abstract*: Explainability is a c
 ritical factor in enhancing the trustworthiness and acceptance of artifici
 al intelligence (AI) in healthcare\, where decisions have a direct impact 
 on patient outcomes. Despite significant advancements in AI interpretabili
 ty\, clear guidelines on when and to what extent explanations are required
  in medical applications remain insufficient. In this talk\, I will provid
 e guidance on the need for explanations in AI applications within healthca
 re. I will discuss possible explainable AI frameworks that can be used to 
 identify new patterns and offer insights through explainable AI methods. T
 hese approaches have the potential to uncover new biomarkers and novel pat
 terns relevant to the applications of interest. Finally\, I will present s
 ome basic examples from neuroscience research to illustrate these concepts
 .  \n\n*_Retrospective evaluation and comparison of state-of-the-art deep 
 learning breast cancer risk prediction algorithms_ - Joshua Rothwell\, PhD
  Student\, Department of Radiology\, University of Cambridge School of Cli
 nical Medicine*\n\nJosh is an MBBS/PhD student\, researching and evaluatin
 g commercial mammography AI tools for the detection and prediction of brea
 st cancer.\n\n*Abstract*: Breast ‘interval’ cancers present between sc
 reening examinations and have poorer prognoses compared to screen-detected
  cancers. Risk prediction tools can identify women who are at increased r
 isk of developing cancer\, and may therefore benefit from supplemental ima
 ging or increased frequency screening\, to detect cancers earlier and impr
 ove patient outcomes. This talk focuses on the retrospective evaluation of 
 two state-of-the-art deep learning risk prediction algorithms\, attempting
  to quantify potential cancer detection rates if implemented into the NHS 
 Breast Screening Programme and discern the characteristics of misclassifie
  d cancers.\n\nThis is a hybrid event\, so you can also join via Zoom:\n\nhtt
 ps://zoom.us/j/99050467573?pwd=UE5OdFdTSFdZeUtIcU1DbXpmdlNGZz09\n\nMeeting
  ID: 990 5046 7573 and Passcode: 617729\n\n\n\nWe look forward to your par
 ticipation! If you are interested in getting involved and presenting your 
 work\, please email Ines Machado at im549@cam.ac.uk\n\nFor more informatio
 n about this seminar series\, see: https://www.integratedcancermedicine.or
 g/research/cambridge-medai-seminar-series/
LOCATION:Jeffrey Cheah Biomedical Centre (Main Lecture Theatre)\, Universi
 ty of Cambridge
END:VEVENT
END:VCALENDAR
