
Convexity and Other Shape Priors for Single and Multiple Object Segmentation


If you have a question about this talk, please contact info@newton.ac.uk.

VMVW03 - Flows, mappings and shapes

Shape is a useful regularization prior for image segmentation. First we discuss a convexity shape prior for single-object segmentation. In the context of discrete optimization, object convexity is represented as a sum of 3-clique potentials penalizing any 1-0-1 configuration on all straight lines. We show that these non-submodular interactions can be optimized efficiently using a trust-region approach. While the quadratic number of 3-cliques is prohibitively high, we designed a dynamic programming technique for evaluating and approximating these cliques in linear time. Our experiments demonstrate the general usefulness of the proposed convexity constraint on synthetic and real image segmentation examples. Unlike standard second-order length regularization, our convexity prior is scale invariant, has no shrinking bias, and is virtually parameter-free.

Segmenting multiple objects with a convex shape prior presents its own challenges, as distinct objects interact in a non-trivial manner. We extend our work on single-convex-object optimization by proposing a multi-object convexity shape prior for multi-label image segmentation.

Next we consider simple shape priors, i.e. priors that can be optimized exactly with a single graph cut, in the context of single-object segmentation. Segmenting multiple objects with such simple shape priors presents its own challenges. We propose a new class of energies for segmentation of multiple foreground objects with a common simple shape prior. Our energy involves infinity constraints. For such energies the standard expansion algorithm has no optimality guarantees and in practice gets stuck in bad local minima. We therefore develop a new move-making algorithm, which we call double expansion. In contrast to expansion, the new move allows each pixel to choose a label from a pair of new labels or keep its old label. This results in an algorithm with optimality guarantees and robust performance in practice. We experiment with several types of shape prior, such as star-shape, compactness, and a novel symmetry prior, and empirically demonstrate the advantage of double expansion.
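As a concrete illustration of the convexity constraint described above (a sketch, not the authors' implementation), a discrete region is convex only if, along every straight line, the foreground pixels form a single contiguous run; any 1-0-1 pattern is a violation. The check below restricts the set of lines to rows, columns, and the two diagonal directions for brevity:

```python
import numpy as np

def has_101_violation(mask):
    """Return True if a binary mask contains a 1-0-1 (foreground, gap,
    foreground) configuration along any row, column, or diagonal,
    i.e. if it violates discrete convexity on those lines.
    Illustrative only; the talk's formulation sums 3-clique penalties
    over all straight-line orientations."""
    lines = list(mask) + list(mask.T)          # rows and columns
    h, w = mask.shape
    for d in range(-h + 1, w):                 # both diagonal directions
        lines.append(np.diagonal(mask, d))
        lines.append(np.diagonal(mask[::-1], d))
    for line in lines:
        idx = np.flatnonzero(line)
        # foreground pixels on this line must be contiguous:
        # span from first to last foreground index must equal their count
        if idx.size >= 2 and idx[-1] - idx[0] + 1 != idx.size:
            return True
    return False

convex = np.array([[0, 1, 1, 0],
                   [1, 1, 1, 1],
                   [0, 1, 1, 0]])
nonconvex = np.array([[1, 0, 1],
                      [0, 0, 0],
                      [1, 0, 1]])
print(has_101_violation(convex))     # False: no 1-0-1 on any tested line
print(has_101_violation(nonconvex))  # True: e.g. the top row is 1-0-1
```

A penalty version of this test, summed over all discretized line orientations, gives the 3-clique energy the talk optimizes.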

This talk is part of the Isaac Newton Institute Seminar Series.


© 2006-2017 Talks.cam, University of Cambridge. Contact Us | Help and Documentation | Privacy and Publicity