
Analysis and applications of structural-prior-based total variation regularization for inverse problems



VMVW02 - Generative models, parameter learning and sparsity

Structural priors and joint regularization techniques, such as parallel level set methods and joint total variation, have recently become popular in variational image processing. Their main application scenarios are multi-modality/multi-spectral imaging settings, where correlation is expected between different channels of the image data. In this context, one can distinguish two approaches for exploiting such correlations: joint reconstruction techniques, which treat all available channels equally, and structural prior techniques, which assume some ground-truth structural information to be available. This talk focuses on a particular instance of the second type, namely structural total-variation-type functionals, i.e., functionals which integrate a spatially dependent pointwise function of the image gradient for regularization.

While methods of this type have been shown to work well in practical applications, some of their analytical properties are not immediate. These include a proper definition for BV functions and non-smooth a priori data, as well as existence results and regularization properties for standard inverse problem settings. In this talk we address some of these issues and show how they can partially be overcome using duality. Employing the framework of functions of a measure, we define structural-TV-type functionals via lower semicontinuous relaxation. Since the relaxed functionals are, in general, not explicitly available, we show that instead of the classical Tikhonov regularization problem one can equivalently solve a saddle-point problem in which no a priori knowledge of the relaxation is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. The talk concludes with proof-of-concept numerical examples. This is joint work with M. Hintermüller and K. Papafitsoros (both from the Weierstrass Institute Berlin).
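To make the idea of a structural-TV-type functional concrete, the following NumPy sketch minimizes a discrete weighted-TV denoising model, 0.5*||u - f||^2 + lam * sum_x w(x)*sqrt(|grad u(x)|^2 + eps^2), by smoothed gradient descent. This is only an illustration of the general technique, not the saddle-point scheme or the relaxed functionals from the talk; the function names, the smoothing parameter eps, and all step sizes are illustrative assumptions.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann (zero-flux) boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:, :] += px[1:, :] - px[:-1, :]
    d[:, 0] += py[:, 0]
    d[:, 1:] += py[:, 1:] - py[:, :-1]
    return d

def structural_tv_denoise(f, w, lam=0.1, eps=0.1, tau=0.15, iters=300):
    """Gradient descent on 0.5*||u-f||^2 + lam*sum w*sqrt(|grad u|^2 + eps^2).

    w is a pointwise spatial weight encoding the structural prior:
    small w where the side information indicates an edge, so that
    jumps at those locations are penalized less.
    """
    u = f.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        # Euler-Lagrange gradient: (u - f) - lam * div(w * grad(u) / mag)
        u -= tau * (u - f - lam * div(w * gx / mag, w * gy / mag))
    return u
```

A structural weight can, for instance, be derived from a co-registered reference channel v via `w = 1.0 / (1.0 + np.hypot(*grad(v)) / gamma)`, which lowers the penalty where v has strong gradients; with `w` identically one the model reduces to ordinary (smoothed) TV denoising.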

This talk is part of the Isaac Newton Institute Seminar Series.

