
Multimodal classification of driver glance


If you have a question about this talk, please contact Marwa Mahmoud.

This talk has been canceled/deleted

This talk is a rehearsal for a talk that I will present at the International Conference on Affective Computing and Intelligent Interaction (ACII 2017).

Abstract— This paper presents a multimodal approach to in-vehicle classification of driver glances. Driver glance is a strong predictor of cognitive load and is a useful input to many applications in the automotive domain. Six descriptive glance regions are defined, and a classifier is trained on video recordings of drivers from a single low-cost camera. Visual features such as head orientation, eye gaze and confidence ratings are extracted, then statistical methods are used to perform failure analysis and calibration on the visual features. Non-visual features such as steering wheel angle and indicator position are extracted from a RaceLogic VBOX system. The approach is evaluated on a dataset containing multiple 60-second samples from 14 participants recorded while driving in a natural environment. We compare our multimodal approach to separate unimodal approaches using both Support Vector Machine (SVM) and Random Forest (RF) classifiers. The RF Mean Decrease in Gini Index is used to rank features, which gives insight into their relative importance and improves classifier performance. We demonstrate that our multimodal approach yields significantly better results than the unimodal approaches. The final model achieves an average F1 score of 70.5% across the six classes.
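To make the pipeline in the abstract concrete, below is a minimal Python sketch (not the authors' code) of a multimodal Random Forest classifier with Gini-based feature ranking, using scikit-learn. The feature names, synthetic data, and hyperparameters are illustrative assumptions; the paper's actual features, calibration steps, and evaluation protocol are not reproduced here.

    # Illustrative sketch only: multimodal glance classification with an RF,
    # impurity-based (Gini) feature ranking, and macro-averaged F1 scoring.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Visual features (head orientation, eye gaze, tracker confidence) plus
    # non-visual features (steering wheel angle, indicator position) are
    # concatenated into one multimodal feature vector per sample.
    feature_names = [
        "head_yaw", "head_pitch", "gaze_x", "gaze_y", "track_confidence",
        "steering_angle", "indicator_position",
    ]
    X = rng.normal(size=(1400, len(feature_names)))   # placeholder data
    y = rng.integers(0, 6, size=1400)                 # six glance regions

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_train, y_train)

    # scikit-learn's impurity-based importances correspond to the mean
    # decrease in the Gini index used in the paper to rank features.
    for name, importance in sorted(zip(feature_names, clf.feature_importances_),
                                   key=lambda p: p[1], reverse=True):
        print(f"{name}: {importance:.3f}")

    # Average F1 across the six classes, matching the reported metric.
    print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))

On real data, the ranking step can also be used for feature selection before retraining, which is one way the paper's reported improvement from feature ranking could be realised.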

This talk is part of the Rainbow Group Seminars series.
