Optimal Tag Sets for Automatic Image Annotation

If you have a question about this talk, please contact Zoubin Ghahramani.

short talk

Automatic image annotation seeks to assign relevant words (e.g. "jungle", "boat", "trees") that describe the actual content of an image, without intermediate manual labelling. Current approaches are largely based on categorization and treat the tags independently, so an annotation (jungle, trees) is judged just as plausible as (jungle, snow). In this talk I will introduce a new form of the Continuous Relevance Model (the BS-CRM) that captures the correlation between keywords and applies a priority beam search algorithm to find a near-optimal set of mutually correlated keywords for an image. This novel approach provides a formal and consistent method for finding an optimal tag set by considering multiple hypotheses for the identity of the keyword set via the beam search algorithm. Furthermore, by limiting the width of the beam, one avoids the combinatorial explosion of enumerating and evaluating every possible keyword set for an image. The talk also examines the performance gains of the CRM and BS-CRM models under both Gaussian and Laplacian kernels for representing the image feature distributions. Extensive evaluation demonstrates the effectiveness of the approach in refining the set of keywords assigned to images.
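The beam-search idea in the abstract can be sketched as follows. This is a minimal Python illustration, not the BS-CRM itself: the per-tag relevance scores and pairwise co-occurrence bonuses are hypothetical stand-ins for the probabilities the CRM would supply, and the set score is simply their sum.

```python
def set_score(tags, relevance, cooccur):
    """Score a candidate tag set: individual relevance plus pairwise
    co-occurrence bonuses (hypothetical stand-ins for CRM probabilities)."""
    total = sum(relevance[t] for t in tags)
    for i, a in enumerate(tags):
        for b in tags[i + 1:]:
            total += cooccur.get(frozenset((a, b)), 0.0)
    return total


def beam_search_tags(relevance, cooccur, set_size=3, beam_width=3):
    """Build a tag set of `set_size` keywords by beam search.

    At each step, every partial set in the beam is extended by every
    unused tag; only the `beam_width` best-scoring partial sets survive,
    avoiding enumeration of all possible keyword sets.
    """
    score = lambda tags: set_score(tags, relevance, cooccur)
    # Start from single-tag hypotheses, trimmed to the beam width.
    beam = sorted(((t,) for t in relevance), key=score, reverse=True)[:beam_width]
    for _ in range(set_size - 1):
        candidates = {
            tuple(sorted(tags + (t,)))
            for tags in beam
            for t in relevance
            if t not in tags
        }
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]


if __name__ == "__main__":
    relevance = {"jungle": 1.0, "snow": 0.9, "trees": 0.8, "boat": 0.5}
    cooccur = {frozenset(("jungle", "trees")): 0.6}
    # (jungle, trees) beats (jungle, snow): the pairwise bonus outweighs
    # snow's slightly higher individual relevance.
    print(beam_search_tags(relevance, cooccur, set_size=2))
```

With correlations treated independently, (jungle, snow) would win on individual scores alone; the pairwise term is what lets the search prefer the mutually correlated set.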

This talk is part of the Machine Learning @ CUED series.


© 2006-2020 Talks.cam, University of Cambridge.