Understanding Audio and Video at Google

If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

Abstract: Google’s mission is to organise the world’s information and make it universally accessible and useful. An enormous chunk of the world’s information is in the form of video and audio, so systems that can efficiently index, search and understand these forms of content are crucial. In this talk, I’ll discuss research on video and audio understanding, including technologies for landmark detection, object recognition, cover-song detection and sound-effects search. I’ll also present some recent work done at Google on distributed training of deep neural networks, and its application to video analysis tasks.

Bio: Tom is a Research Scientist at Google. He currently works in Zurich on improving YouTube’s ContentID system. Previously he was part of the Machine Perception group in Google Research in Mountain View, CA, where he worked on content-based audio analysis for tasks such as auditory scene understanding and music recommendation. Tom received a BA and MSci in Natural Sciences (specialising in Experimental and Theoretical Physics) from the University of Cambridge, and completed a PhD on Computational Auditory Models at the Centre for the Neural Basis of Hearing in the Department of Physiology, Development and Neuroscience in Cambridge.

This talk is part of the Signal Processing and Communications Lab Seminars series.


© 2006-2024 Talks.cam, University of Cambridge.