
Low-Rank Inducing Norms with Optimality Interpretations


If you have a question about this talk, please contact Tim Hughes.

This talk is about optimization problems that are convex apart from a sparsity or rank constraint. Such problems arise in compressed sensing, linear regression, matrix completion, low-rank approximation, and many other settings. Since these problems are generally NP-hard, one of the most widely used approaches today is so-called nuclear norm regularization. Despite its appealing probabilistic guarantees, this approach often fails for problems with structural constraints.
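
As a concrete illustration of this surrogate (an editorial sketch, not part of the talk; the synthetic matrix M, the sampling mask, and the use of CVXPY are assumptions made for the example), rank minimization under linear constraints is typically relaxed by minimizing the nuclear norm, i.e. the sum of the singular values:

    import numpy as np
    import cvxpy as cp

    # Editorial sketch: low-rank matrix completion via nuclear norm regularization.
    # The data below are synthetic; the talk itself does not prescribe this setup.
    rng = np.random.default_rng(0)
    m, n, r = 20, 20, 2
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
    mask = (rng.random((m, n)) < 0.5).astype(float)                # observed entries

    X = cp.Variable((m, n))
    objective = cp.Minimize(cp.normNuc(X))            # convex surrogate for rank(X)
    constraints = [cp.multiply(mask, X - M) == 0]     # match the observed entries
    cp.Problem(objective, constraints).solve()

    print("numerical rank of the recovery:", np.linalg.matrix_rank(X.value, tol=1e-6))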

In this talk, we present an alternative by introducing the family of so-called low-rank inducing norms as convexifiers. Each norm is the convex envelope of a unitarily invariant norm plus a rank constraint (a notational sketch of this is given after the list below). As a result, these norms have several interesting properties, which will be discussed throughout the talk. They:

i. Give a simple deterministic test of whether the solution to the convexified problem is a solution to a specific non-convex problem.

ii. Often find low-rank solutions in cases where the nuclear norm fails to do so.

iii. Allow us to analyze the convergence of non-convex proximal splitting algorithms with convex analysis tools.

iv. Provide a more efficient regularization than the traditional scalar scaling of the nuclear norm.

v. Lead to a different interpretation of the nuclear norm than the one that is traditionally presented.
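
To fix notation for the envelope statement above (an editorial sketch; the symbols \|\cdot\|_g, r, and \chi are introduced here for illustration, and the paper's precise formulation may differ): given a unitarily invariant norm \|\cdot\|_g and a target rank r, the associated low-rank inducing norm can be described as the convex envelope

    \|X\|_{g,r*} := \operatorname{conv}\Big( \|X\|_g + \chi_{\operatorname{rank}(X) \le r}(X) \Big),

where \chi_{\operatorname{rank}(X) \le r} is the indicator function that equals 0 when \operatorname{rank}(X) \le r and +\infty otherwise. For r equal to the full rank the indicator is vacuous and the envelope is simply \|\cdot\|_g; for smaller r, the rank constraint is built into the convexification itself rather than handled separately.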

Moreover, all of these results can be generalized to so-called atomic norms.

This talk is part of the CUED Control Group Seminars series.
