From Wearable Sensing to Contextual AI: An Egocentric Perspective

If you have a question about this talk, please contact Cecilia Mascolo.

https://cam-ac-uk.zoom.us/j/81168979243?pwd=q2VIAwupSBfNcTtBVjUez8pvwbBPwj.1

Abstract: The long-held vision of wearable computing is to move beyond simple activity tracking towards proactive, intelligent assistance. However, achieving true contextual understanding on resource-constrained devices like smart glasses remains a substantial challenge.

This talk charts a path from foundational wearable sensing to the future of contextual AI, framed through an egocentric perspective. It begins by discussing the evolution from traditional mobile sensing to the rich, multimodal data streams enabled by modern research platforms like Project Aria Glasses and large-scale egocentric datasets. The talk then examines the technical building blocks required for real-world contextual understanding, shifting from basic activity recognition to complex acoustic scene analysis and targeted speech enhancement that solves the “cocktail party problem” in social settings.

Finally, it connects these capabilities to the broader vision for a future where AI-powered eyewear can understand its wearer’s environment, model social context, and effectively serve as a digital extension of human memory and perception. Throughout, the presentation bridges the gap between academic research and product deployment, making the case that the convergence of egocentric sensing, on-device AI, and contextual understanding is poised to redefine how we interact with the world around us.

Bio: Chi Ian Tang is a Senior Research Scientist at Meta Reality Labs, working on the foundational AI that powers smart glasses. He bridges the gap between academic research and consumer products – a journey that began with his PhD at the University of Cambridge Mobile Systems Research Lab, where he developed novel self-supervised and continual learning methods for wearable sensing, and continued at Nokia Bell Labs, where he focused on multimodal analysis for longitudinal health insights.

Today at Meta Reality Labs, he tackles real-world perceptual challenges on resource-constrained devices, contributing to core audio AI capabilities and features like Conversation Focus. His work has been published extensively at top-tier venues including ICML, IMWUT, and ICASSP, pioneering approaches in self-supervised learning for mobile sensing. As an active member of the pervasive computing community, he regularly organises workshops and tutorials on advancing human sensing and serves as an Associate Editor for the ACM IMWUT journal. His long-term research goal is to close the gap between human perception and machine understanding, enabling wearable AI that can see, hear, and reason about the world as we do.

More information can be found at: https://iantang.co/

This talk is part of the Mobile and Wearable Health Seminar Series.

