Audiovisual Discrimination Between Laughter and Speech

  • Speaker: Stavros Petridis, Imperial College London
  • Time: Thursday 04 December 2008, 14:15-15:15
  • Venue: SS03

If you have a question about this talk, please contact Laurel D. Riek.

In human-human interaction, information is communicated between the parties through various channels. Speech is usually the dominant channel, but other cues, such as facial expressions, head gestures, hand gestures, and non-linguistic vocalizations, play an important role in communication as well. One of the most important non-linguistic vocalizations is laughter, which is reported to be the most frequently annotated non-verbal behaviour in meeting corpora. Laughter is a powerful affective and social signal, since people very often express their emotions and regulate conversations by laughing. Although there are a few works on automatic laughter detection, past research has focused mainly on audio-based detection.

Inspired by results in audiovisual speech recognition and audiovisual affect recognition, this talk presents an audiovisual approach to distinguishing spontaneous episodes of laughter from speech. Information is extracted simultaneously from the audio and visual channels and fused using decision-level and feature-level fusion, leading to improved performance over single-modal approaches. The first part of the talk investigates the performance of different combinations of audio and visual cues: facial expressions and head movements for video, and spectral and prosodic features for audio. Once the most informative cues are identified, the second part compares two types of features: static features extracted on a per-frame basis, and temporal features extracted over a temporal window, describing the evolution of the static features over time. This is followed by a comparison of the two fusion levels, decision-level and feature-level fusion (sketched below). Finally, initial results on recognizing two types of laughter are presented.
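As a rough illustration of the two fusion levels and of temporal versus static features, here is a minimal Python sketch on synthetic data. The feature dimensions, the logistic-regression classifier, the modality weight, and the window-summary statistics are all illustrative assumptions; the talk does not specify the actual models or features.

    # Minimal sketch of feature-level vs decision-level fusion for a
    # binary laughter-vs-speech task. All data is synthetic and the
    # logistic-regression classifier is a stand-in for whatever models
    # the actual system uses.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200                            # number of (synthetic) examples
    audio = rng.normal(size=(n, 12))   # e.g. spectral + prosodic features
    video = rng.normal(size=(n, 20))   # e.g. facial-point / head-movement features
    y = rng.integers(0, 2, size=n)     # 1 = laughter, 0 = speech

    # Feature-level fusion: concatenate both modalities' feature vectors
    # and train a single classifier on the joint representation.
    fused = np.hstack([audio, video])
    clf_feat = LogisticRegression(max_iter=1000).fit(fused, y)
    p_feat = clf_feat.predict_proba(fused)[:, 1]

    # Decision-level fusion: train one classifier per modality, then
    # combine their posteriors (here: a weighted sum with an assumed weight).
    clf_a = LogisticRegression(max_iter=1000).fit(audio, y)
    clf_v = LogisticRegression(max_iter=1000).fit(video, y)
    w_audio = 0.6                      # modality weight: a tunable assumption
    p_dec = (w_audio * clf_a.predict_proba(audio)[:, 1]
             + (1 - w_audio) * clf_v.predict_proba(video)[:, 1])

    print("feature-level accuracy:", ((p_feat > 0.5) == y).mean())
    print("decision-level accuracy:", ((p_dec > 0.5) == y).mean())

    # Temporal features: summarize each static (per-frame) feature over a
    # sliding window, here with the mean and standard deviation, so each
    # window vector describes how the static features evolve over time.
    def temporal_features(frames: np.ndarray, win: int = 10) -> np.ndarray:
        out = []
        for start in range(0, len(frames) - win + 1, win):
            seg = frames[start:start + win]
            out.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        return np.array(out)

The weighted-sum combination is only one of several common decision-level rules (product, max, or a trained combiner are alternatives); which rule the presented system uses is not stated in the abstract.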

This talk is part of the Rainbow Interaction Seminars series.
