Concept-based Interpretable Models for Affective Computing Applications

If you have a question about this talk, please contact Hatice Gunes.

Talk abstract: In today’s era of intelligent connectivity, Affective Computing (AC) plays a vital role in enabling AI systems to understand and respond to human emotions. However, a key challenge persists: how can we design models that are both accurate and explainable? This talk explores how concept-level interpretability can be integrated into model design to make AC systems not only intelligent but also transparent. We introduce a family of concept-based AC frameworks that advance explainable and efficient affective AI across a range of applications, from facial expression recognition and conversational engagement estimation to video-based mental health assessment. Together, these works outline a pathway toward interpretable, trustworthy, and deployable affective AI for real-world impact.

Speaker bio: Xinyu Li is a final-year doctoral student at the Behaviour AI Lab, School of Computing Science, University of Glasgow. His research focuses on Affective Computing, Explainable Artificial Intelligence (XAI), and Multimodal Machine Learning, with an emphasis on developing interpretable and trustworthy human-centred AI systems.

This talk is part of the Rainbow Group Seminars series.


© 2006-2025 Talks.cam, University of Cambridge.