Towards Trustworthy Natural Language Processing

  • Speaker: Jasmijn Bastings (Google Brain)
  • Date and time: Friday 18 November 2022, 12:00-13:00
  • Venue: Virtual (Zoom)

If you have a question about this talk, please contact Michael Schlichtkrull.

Abstract:

Recent NLP models can achieve incredibly high accuracies, but how do we know whether we can trust them, or whether they are trustworthy? First, we will establish what these terms mean and what the umbrella of trustworthy NLP covers. We'll then discuss what desiderata a trustworthy NLP system should meet, before looking at recent work done at Google towards this end, touching on the topics of interpretability, privacy, and robustness.

Bio:

Jasmijn Bastings (she/her) is a researcher at Google Brain Amsterdam, having joined Google in Berlin in late 2019. She holds a PhD from the ILLC, University of Amsterdam, on the topic of Interpretable and Linguistically-informed Deep Learning for NLP. Recently, Jasmijn has been focusing on explainability, privacy, and robustness. She is also one of the organizers of BlackBoxNLP: Analyzing and interpreting neural networks for NLP, a workshop co-located with EMNLP 2022.

Topic: NLIP Seminar
Time: Nov 18, 2022 12:00 PM London

Join Zoom Meeting https://cl-cam-ac-uk.zoom.us/j/91073515866?pwd=UnJmTER6dmZLeWpPOUo0VUJBOGxYQT09

Meeting ID: 910 7351 5866 Passcode: 646960

This talk is part of the NLIP Seminar Series.
