
Understanding and Automating Counterspeech


If you have a question about this talk, please contact Dr Stefanie Ullmann.

Please register online:

Online hate speech and the spread of misinformation continue to increase, notably exacerbated by the Covid-19 pandemic and recurring national lockdowns. According to a review published by The Alan Turing Institute in 2019, between 30% and 40% of people in the UK have witnessed harmful content online, and between 10% and 20% have personally experienced online abuse. Moreover, recent government statistics show that the number of hate crimes recorded in England and Wales has increased steadily over the last six years, and research suggests a link between online hate speech and real-life acts of discrimination and violence.

While awareness of the problem is growing at the societal, governmental and corporate levels, current approaches remain insufficient, as the spread of hate speech and disinformation continues. Most social media platforms still take a mainly reactive approach to harmful content, and even if more content were deleted, it remains questionable whether removal or blocking is the best way to engage with the problem. At the same time, companies such as Apple, Microsoft and Amazon report an increasing amount of verbal abuse directed at their virtual personal assistants Siri, Cortana and Alexa, raising the related question of how voice agents should counter this form of toxic communication.

A growing body of research from fields as diverse as linguistics, philosophy of language, media and communication studies, law and policy, computer science and information engineering has been analysing the role and functions of counterspeech as a means of successfully combating hate speech, verbal abuse and misinformation. Researchers at the Dangerous Speech Project were among the first, in 2015/2016, to study counterspeech in the context of harmful language online. In 2017, Facebook began to launch and promote several counterspeech initiatives to fight hate speech on its social media platform. In descriptive and experimental studies, researchers have found that counterspeech on social media is not only a suitable way of engaging with harmful speech but also has a positive effect on bystanders: other users become more likely to comment on, like or reproduce someone else’s counterspeech. More recently, experts in computer science and information engineering have begun to apply computational methods to the processing, generation and evaluation of counterspeech.

This workshop brings together experts from different fields in academia (philosophy of language, sociology, law, media and communication studies, peacebuilding and conflict studies, computer science), political activism and industry to further the conversation and address some of the most pressing questions as well as computational approaches to counterspeech.


Amalia Álvarez-Benjumea (Max Planck Institute for Research on Collective Goods)

Babak Bahador (George Washington University)

Cathy Buerger (Dangerous Speech Project)

Joshua Garland (Santa Fe Institute)

Rae Langton (University of Cambridge)

Sina Laubenstein (No Hate Speech Movement Germany)

Punyajoy Saha (Indian Institute of Technology Kharagpur)

Erin Saltman (Global Internet Forum to Counter Terrorism)

Kenneth S. Stern (Bard Centre for the Study of Hate)

Nadine Strossen (New York Law School)

Lynne Tirrell (University of Connecticut)

Bertie Vidgen (Alan Turing Institute)

The schedule can be found here:

This talk is part of the Giving Voice to Digital Democracies series.



© 2006–2023, University of Cambridge.