BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Navigating Privacy Risks in Language Models - Peter Kairouz -- Goo
 gle
DTSTART:20240716T140000Z
DTEND:20240716T150000Z
UID:TALK219175@talks.cam.ac.uk
CONTACT:Nic Lane
DESCRIPTION:The emergence of large language models (LLMs) presents signifi
 cant opportunities in content generation\, question answering\, and inform
 ation retrieval. Nonetheless\, training\, fine-tuning\, and deploying thes
 e models entail privacy risks. This talk will address these risks\, outli
 ning privacy principles inspired by known LLM vulnerabilities when handlin
 g user data. We demonstrate how techniques like federated learning and us
 er-level differential privacy (DP) can systematically mitigate many of the
 se risks at the cost of increased computation. In scenarios where only mod
 erate-to-weak user-level DP is achievable\, we propose a strong (task-and-
 model-agnostic) membership inference attack that allows us to quantify ris
 k by estimating the actual privacy leakage (empirical epsilon) accurately 
 in a single training run. The talk will conclude with a few projections an
 d compelling research directions.
LOCATION:West 2\, West Hub (https://www.westcambridgehub.uk/visit)
END:VEVENT
END:VCALENDAR
