
Fast Visual Tracking By Temporal Consensus


If you have a question about this talk, please contact Philip Sterne.

A.H. Gee and R. Cipolla, "Fast visual tracking by temporal consensus", Image and Vision Computing, 14(2):105-114, 1996.

This is an older paper, but it makes interesting points about visual tracking of head pose. The authors use a very simple and inaccurate feature (pixel/corner) detector, yet still obtain very good pose recovery results.

The paper can be downloaded here:

http://mi.eng.cam.ac.uk/reports/svr-ftp/gee_tr207.ps.Z

Abstract

At the heart of every model-based visual tracker lies a pose estimation routine. Recent work has emphasized the use of least-squares techniques which employ all the available data to estimate the pose. Such techniques are, however, susceptible to the sort of spurious measurements produced by visual feature detectors, often resulting in an unrecoverable tracking failure. This paper investigates an alternative approach, where a minimal subset of the data provides the pose estimate, and a robust regression scheme selects the best subset. Bayesian inference in the regression stage combines measurements taken in one frame with predictions from previous frames, eliminating the need to further filter the pose estimates. The resulting tracker performs very well on the difficult task of tracking a human face, even when the face is partially occluded. Since the tracker is tolerant of noisy, computationally cheap feature detectors, frame-rate operation is comfortably achieved on standard hardware.
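
The abstract describes the approach at a high level: fit the pose from a minimal subset of feature measurements, and let a robust, consensus-style regression pick the best subset, with predictions from previous frames folded into the scoring. The Python sketch below illustrates that idea only; it is not the authors' implementation, and the 2-D similarity-transform "pose", the subset scoring, and all names and parameters are illustrative assumptions.

import itertools
import numpy as np

def fit_pose(model_pts, image_pts):
    # Least-squares 2-D similarity transform (scale, rotation, translation)
    # mapping model points onto image points; a toy stand-in for the paper's
    # minimal-subset pose routine (reflections are not handled here).
    mu_m, mu_i = model_pts.mean(axis=0), image_pts.mean(axis=0)
    M, I = model_pts - mu_m, image_pts - mu_i
    U, S, Vt = np.linalg.svd(I.T @ M)        # Procrustes-style rotation fit
    R = U @ Vt
    s = S.sum() / (M ** 2).sum()
    t = mu_i - s * (R @ mu_m)
    return s, R, t

def project(pose, model_pts):
    s, R, t = pose
    return s * (model_pts @ R.T) + t

def track_frame(model_pts, image_pts, predicted_pts,
                subset_size=3, inlier_tol=1.0, prior_weight=0.5):
    # Fit a pose from every minimal subset and score it by (a) how many
    # measurements it explains and (b) how closely it agrees with the
    # prediction from previous frames -- a crude proxy for the paper's
    # Bayesian combination of measurements and temporal predictions.
    best_pose, best_score = None, -np.inf
    for idx in itertools.combinations(range(len(model_pts)), subset_size):
        idx = list(idx)
        pose = fit_pose(model_pts[idx], image_pts[idx])
        reproj = project(pose, model_pts)
        residuals = np.linalg.norm(reproj - image_pts, axis=1)
        inliers = int((residuals < inlier_tol).sum())
        prior_penalty = np.linalg.norm(reproj - predicted_pts, axis=1).mean()
        score = inliers - prior_weight * prior_penalty
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(-1.0, 1.0, size=(6, 2))              # planar feature model
    true_pose = (1.2, np.eye(2), np.array([5.0, 3.0]))
    clean = project(true_pose, model)
    measured = clean + rng.normal(0.0, 0.1, clean.shape)     # noisy detections
    measured[0] += 20.0                                      # one spurious detection
    predicted = clean + rng.normal(0.0, 0.2, clean.shape)    # temporal prediction
    s, R, t = track_frame(model, measured, predicted)
    print("recovered scale:", s, "translation:", t)

In this toy setup the spurious first measurement is simply outvoted by subsets drawn from the remaining features, which is the intuition behind the robustness claim in the abstract.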

This talk is part of the Machine Learning Journal Club series.
