Concept-based Interpretable Models for Affective Computing Applications
- Speaker: Xinyu Li, University of Glasgow
- Date & Time: Thursday 23 October 2025, 14:00-14:40
- Venue: FW011 - William Gates Building
Abstract
In today's era of intelligent connectivity, Affective Computing (AC) plays a vital role in enabling AI systems to understand and respond to human emotions. However, a key challenge persists: how can we design models that are both accurate and explainable? This talk explores how concept-level interpretability can be integrated into model design to make AC systems not only intelligent but also transparent. We introduce a family of concept-based AC frameworks that advance explainable and efficient affective AI across a range of applications, from facial expression recognition and conversational engagement estimation to video-based mental health assessment. Together, these works outline a pathway toward interpretable, trustworthy, and deployable affective AI for real-world impact.
Speaker bio: Xinyu Li is a final-year doctoral student at the Behaviour AI Lab, School of Computing Science, University of Glasgow. His research focuses on Affective Computing, Explainable Artificial Intelligence (XAI), and Multimodal Machine Learning, with an emphasis on developing interpretable and trustworthy human-centred AI systems.
Series: This talk is part of the Rainbow Group Seminars series.