Efficient Proximal Mapping Computation for Low-Rank Inducing Norms

Author(s):  
Christian Grussler ◽  
Pontus Giselsson

Abstract Low-rank inducing unitarily invariant norms have been introduced to convexify problems with a low-rank/sparsity constraint. The most well-known member of this family is the so-called nuclear norm. To solve optimization problems involving such norms with proximal splitting methods, efficient ways of evaluating the proximal mapping of the low-rank inducing norms are needed. This is known for the nuclear norm, but not for most other members of the low-rank inducing family. This work supplies a framework that reduces the proximal mapping evaluation to a nested binary search, in which each iteration requires the solution of a much simpler problem. The simpler problem can often be solved analytically, as demonstrated for the so-called low-rank inducing Frobenius and spectral norms. The framework also makes it possible to compute the proximal mapping of increasing convex functions composed with these norms, as well as projections onto their epigraphs.
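
For the nuclear norm, the one member of this family whose proximal mapping is classical, the evaluation reduces to soft-thresholding of the singular values. A minimal sketch of that baseline (function and variable names are ours; for the other low-rank inducing norms the paper's framework replaces this closed form with a nested binary search):

```python
import numpy as np

def prox_nuclear(Y, t):
    """Proximal mapping of t * ||.||_* at Y: soft-threshold the singular values.

    prox_{t||.||_*}(Y) = U diag(max(s - t, 0)) V^T, where Y = U diag(s) V^T.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - t, 0.0)   # singular-value soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt
```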

2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Lingchen Kong ◽  
Levent Tunçel ◽  
Naihua Xiu

Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally $\mathcal{NP}$-hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski (Math Program, 2011)) to linear transformations in LMR. Using the two characteristic s-goodness constants, $\gamma_s$ and $\hat{\gamma}_s$, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space properties. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization.
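
The convex relaxation discussed above, rank replaced by the nuclear norm under linear measurements, can be written down directly in a modeling language. A hedged sketch using CVXPY (the random sensing operator, problem sizes, and rank-2 target are our illustrative choices, not data from the paper):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 8, 8, 50
A = rng.standard_normal((p, m * n))                  # linear map acting on vec(X)
X_true = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank-2 target
b = A @ X_true.ravel(order="F")                      # column-major, matching cp.vec

X = cp.Variable((m, n))
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), [A @ cp.vec(X) == b])
problem.solve()
print("recovered rank:", np.linalg.matrix_rank(X.value, tol=1e-6))
```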


2018 ◽  
Vol 8 (1) ◽  
pp. 51-96 ◽  
Author(s):  
Qiuwei Li ◽  
Zhihui Zhu ◽  
Gongguo Tang

Abstract This work considers two popular minimization problems: (i) the minimization of a general convex function f(X) with the domain being positive semi-definite matrices, and (ii) the minimization of a general convex function f(X) regularized by the matrix nuclear norm $\|X\|_{*}$ with the domain being general matrices. Despite their optimal statistical performance in the literature, these two optimization problems have a high computational complexity even when solved using tailored fast convex solvers. To develop faster and more scalable algorithms, we follow the proposal of Burer and Monteiro to factor the low-rank variable $X = UU^{\top } $ (for semi-definite matrices) or $X=UV^{\top } $ (for general matrices) and also replace the nuclear norm $\|X\|_{*}$ with $\big(\|U\|_{F}^{2}+\|V\|_{F}^{2}\big)/2$. In spite of the non-convexity of the resulting factored formulations, we prove that each critical point either corresponds to the global optimum of the original convex problems or is a strict saddle where the Hessian matrix has a strictly negative eigenvalue. Such a nice geometric structure of the factored formulations allows many local-search algorithms to find a global optimizer even with random initializations.
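
A minimal sketch of the factored formulation for the general-matrix case, with $f(X)=\tfrac{1}{2}\|X-M\|_F^2$ as a stand-in convex loss and plain gradient descent (both are our choices for illustration; the paper's result is about the landscape, not a specific solver):

```python
import numpy as np

def factored_descent(M, r, lam=0.1, step=0.05, iters=1000, seed=0):
    """Minimize f(U V^T) + lam*(||U||_F^2 + ||V||_F^2)/2 by gradient descent."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, r))   # random initialization suffices
    V = 0.1 * rng.standard_normal((n, r))   # thanks to the strict-saddle property
    for _ in range(iters):
        R = U @ V.T - M                     # gradient of f at X = U V^T
        gU = R @ V + lam * U                # d/dU of the factored objective
        gV = R.T @ U + lam * V              # d/dV of the factored objective
        U -= step * gU
        V -= step * gV
    return U, V
```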


Author(s):  
Edward Cheung ◽  
Yuying Li

The Frank-Wolfe (FW) algorithm has been widely used for solving nuclear norm constrained problems, since it does not require projections. However, FW often yields high-rank intermediate iterates, which can incur prohibitive time and space costs for large problems. To address this issue, we propose a rank-drop method for nuclear norm constrained problems. The goal is to generate descent steps that lead to rank decreases, maintaining low-rank solutions throughout the algorithm. Moreover, the rank-drop subproblems are constrained so that the resulting step remains feasible and can be readily incorporated into a projection-free minimization method, e.g., Frank-Wolfe. We demonstrate that by incorporating rank-drop steps into the Frank-Wolfe algorithm, the rank of the solution is greatly reduced compared to the original Frank-Wolfe or its common variants.
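
The reason FW needs no projection on the nuclear-norm ball $\{X : \|X\|_* \le \tau\}$ is that its linear minimization oracle has a rank-one closed form built from the top singular pair of the gradient. A minimal sketch of that standard oracle (names are ours; the rank-drop steps proposed above are separate descent steps, not a change to this oracle):

```python
import numpy as np
from scipy.sparse.linalg import svds

def fw_lmo_nuclear(grad, tau):
    """argmin over {||S||_* <= tau} of <grad, S>, i.e. -tau * u1 v1^T."""
    u, _, vt = svds(grad, k=1)              # top singular pair of the gradient
    return -tau * np.outer(u[:, 0], vt[0, :])
```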


Author(s):  
Mikhail Krechetov ◽  
Jakub Marecek ◽  
Yury Maximov ◽  
Martin Takac

Low-rank methods for semi-definite programming (SDP) have gained a lot of interest recently, especially in machine learning applications. Their analysis often involves determinant-based or Schatten-norm penalties, which are difficult to implement in practice due to their high computational cost. In this paper, we propose Entropy-Penalized Semi-Definite Programming (EP-SDP), which provides a unified framework for a broad class of penalty functions used in practice to promote a low-rank solution. We show that EP-SDP problems admit an efficient numerical algorithm with (almost) linear time complexity for the gradient computation; this makes the approach useful for many machine learning and optimization problems. We illustrate the practical efficiency of our approach on several combinatorial optimization and machine learning problems.
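
A hedged sketch of the kind of quantity an entropy penalty targets: for $X \succeq 0$ with its spectrum normalized to a probability distribution, low rank corresponds to low eigenvalue entropy. The exact penalty and algorithm in the paper differ; this only illustrates the quantity being penalized:

```python
import numpy as np

def spectral_entropy(X, eps=1e-12):
    """Entropy of the normalized spectrum of a symmetric PSD matrix X."""
    w = np.linalg.eigvalsh(X)               # eigenvalues of symmetric X
    p = np.clip(w, eps, None)
    p = p / p.sum()                          # normalize to a distribution
    return -np.sum(p * np.log(p))            # ~0 for rank one, log(n) for full rank
```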


2018 ◽  
Vol 35 (11) ◽  
pp. 1549-1566 ◽  
Author(s):  
Zhichao Xue ◽  
Jing Dong ◽  
Yuxin Zhao ◽  
Chang Liu ◽  
Ryad Chellali

2018 ◽  
Vol 68 ◽  
pp. 76-87 ◽  
Author(s):  
Jing Dong ◽  
Zhichao Xue ◽  
Jian Guan ◽  
Zi-Fa Han ◽  
Wenwu Wang

2020 ◽  
Vol 40 (4) ◽  
pp. 2626-2651 ◽
Author(s):  
André Uschmajew ◽  
Bart Vandereycken

Abstract The absence of spurious local minima in certain nonconvex low-rank matrix recovery problems has been of recent interest in computer science, machine learning and compressed sensing, since it explains the convergence of some low-rank optimization methods to global optima. One such example is low-rank matrix sensing under restricted isometry properties (RIPs). It can be formulated as a minimization problem for a quadratic function on the Riemannian manifold of low-rank matrices, with a positive semidefinite Riemannian Hessian that acts almost like an identity on low-rank matrices. In this work, new estimates for the singular values of local minima of such problems are given, which lead to improved bounds on the RIP constants required to ensure the absence of nonoptimal local minima and sufficiently negative curvature at all other critical points. A geometric viewpoint is taken, which is inspired by the fact that the Euclidean distance function to a rank-$k$ matrix possesses no critical points on the corresponding embedded submanifold of rank-$k$ matrices except for the single global minimum.
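
A hedged sketch of the matrix sensing objective referenced above, $\sum_i (\langle A_i, X\rangle - b_i)^2$ over rank-$k$ matrices, written on a factored parametrization $X = L R^\top$ for simplicity (the paper works directly on the Riemannian manifold of rank-$k$ matrices; the names here are illustrative):

```python
import numpy as np

def sensing_loss(L, R, A, b):
    """Quadratic sensing loss; A has shape (p, m, n) and X = L @ R.T."""
    X = L @ R.T
    # <A_i, X> - b_i for each measurement i, via contraction over both matrix axes
    residual = np.tensordot(A, X, axes=([1, 2], [0, 1])) - b
    return np.sum(residual ** 2)
```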

