
Rethinking Benchmarking in AI


If you have a question about this talk, please contact Marinela Parovic.

The current benchmarking paradigm in AI has many issues: benchmarks saturate quickly, are susceptible to overfitting, contain exploitable annotator artifacts, have unclear or imperfect evaluation metrics, and do not measure what we really care about. I will talk about my work in trying to rethink the way we do benchmarking in AI, specifically in natural language processing, focusing mostly on the recently launched Dynabench platform.
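To make the dynamic benchmarking idea behind Dynabench concrete, below is a minimal, hypothetical sketch (not the platform's actual code) of a human-and-model-in-the-loop collection round: annotators write examples intended to fool the current model, and only the model-fooling examples are kept for the next benchmark round. All names and the toy classifier are illustrative assumptions.

    # Hypothetical sketch of one round of dynamic adversarial data collection.
    # Humans submit (text, gold_label) pairs; examples the in-the-loop model
    # misclassifies ("fooling" examples) form the next, harder benchmark round.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Example:
        text: str              # input written by a human annotator
        label: str             # gold label the annotator intends
        model_prediction: str  # what the in-the-loop model predicted

    def collect_round(
        model: Callable[[str], str],                 # current model in the loop
        annotator_stream: List[Tuple[str, str]],     # (text, gold_label) pairs
    ) -> List[Example]:
        """Keep only the examples the current model gets wrong."""
        fooled: List[Example] = []
        for text, gold in annotator_stream:
            pred = model(text)
            if pred != gold:
                fooled.append(Example(text, gold, pred))
        return fooled

    if __name__ == "__main__":
        toy_model = lambda text: "positive"  # stand-in for a real classifier
        stream = [("Great film!", "positive"),
                  ("Sarcastically 'great' film...", "negative")]
        round_1 = collect_round(toy_model, stream)
        print(f"{len(round_1)} model-fooling example(s) collected for the next round")

In the full setting, a stronger model trained on each round's fooling examples becomes the target for the next round, so the benchmark evolves with the models rather than saturating.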

This talk is part of the Language Technology Lab Seminars series.
