
Exploring and Controlling Social Values in Large Language Models through Role-Playing 


  • Speaker: Paul Röttger (University of Oxford)
  • Time: Friday 20 January 2023, 12:00-13:00
  • Venue: Computer Lab, SS03

If you have a question about this talk, please contact Michael Schlichtkrull.

Abstract:

Social values are a key factor in human decision-making. Some people, for example, oppose the death penalty while others support it, and there is no single objective truth. Large language models are pre-trained on texts authored by many different people with different social values. But when prompted to answer an ethical question or complete a subjective task, model responses will necessarily align with some social values, and not others. This leads to two questions that I want to answer in my research: 1) What social values are reflected in model behaviour? 2) How can we control these values, and by extension model behaviour? In my talk, I will introduce role-playing as a framework for exploring these questions, differentiating between generic roles that models play by default, and specific roles that we ask them to play, for example based on sociodemographic attributes. I will discuss requirements for successful role-playing, including role stability, internal and external alignment, as well as the limitations of role-playing. Lastly, I will present initial role-playing experiments for hate speech detection, as a highly subjective task.

Bio:

Paul Röttger is a final-year DPhil student at the University of Oxford, working on natural language processing. In his thesis, he focused on evaluating and improving hate speech detection models, adapting language models to language change, and managing subjectivity in data annotation. His main research interest now is in exploring and controlling the behaviour of large language models in relation to social values, as part of a larger goal to make models more helpful and less harmful.

This talk is part of the NLIP Seminar Series.


© 2006-2024 Talks.cam, University of Cambridge.