BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Making Large Language Models Safe: A Case Study of Llama2 - Pushka
 r Mishra - Lead AI Research Engineer\, Meta and Computer Science Part 1B S
 upervisor\, University of Cambridge
DTSTART:20240221T150500Z
DTEND:20240221T155500Z
UID:TALK209260@talks.cam.ac.uk
CONTACT:Ben Karniely
DESCRIPTION:Large Language Models (LLMs) have seen a lot of interest from 
 all over the world\, especially since ChatGPT became the fastest-growing 
 consumer internet app in history. As we enter a new era of possibilities 
 with AI\, new challenges also present themselves. In July of 2023\, Meta 
 open-sourced the largest language models to date\, making it one of the m
 ost important moments in the development of AI. Llama2 was the first LLM 
 of its size and capabilities to be open-sourced\; both the base LLM and a
  version fine-tuned for chat were released publicly for researchers and i
 ndustry practitioners to leverage. In this talk\, I will recap the journe
 y of making the Llama2 models safe and robust against misuse such as hate
  speech and misinformation. The talk will cover the technical details of 
 how we defined safety for an LLM\, the strategies we leveraged to train a
 nd fine-tune the models towards being safe\, and the evaluations we condu
 cted to verify that we had achieved the level of safety we desired. I wil
 l also discuss the challenges that remain\, and possible directions for a
 ddressing them.\n\nLink to join virtually: https://cam-ac-uk.zoom.us/j/81
 322468305\n\nThis talk is not being recorded.
LOCATION:Lecture Theatre 1\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
