Computing Nearest Correlation Matrix via Low-Rank ODE's Based Technique

Symmetry ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 1824
Author(s):  
Mutti-Ur Rehman ◽  
Jehad Alzabut ◽  
Kamaleldin Abodayeh

For an n-dimensional real-valued matrix A, computing the nearest correlation matrix — that is, the closest symmetric, positive semi-definite matrix with unit diagonal and off-diagonal entries between −1 and 1 — is a problem that arises in the finance industry, where correlations exist between stocks. The methodology presented in this article computes an admissible perturbation matrix and a perturbation level that shift the negative spectrum of the perturbed matrix to become non-negative or strictly positive. The solution of the optimization problem constructs a gradient system of ordinary differential equations that yields the desired perturbation matrix. Numerical testing provides sufficient evidence for the shifting of the negative spectrum and the computation of the nearest correlation matrix.
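For reference, the projection structure of the nearest-correlation-matrix problem can be sketched with a simple alternating-projections baseline (in the spirit of Higham's classical method, not the ODE-based technique of the article); the function name and iteration count are illustrative:

```python
import numpy as np

def nearest_correlation(A, iters=200):
    """Alternating-projections baseline: alternate between projecting
    onto the PSD cone and onto the set of symmetric matrices with unit
    diagonal. (Dykstra's correction would give the exact nearest point;
    this plain version converges to a nearby correlation matrix.)"""
    X = (A + A.T) / 2
    for _ in range(iters):
        w, V = np.linalg.eigh(X)
        X = (V * np.clip(w, 0.0, None)) @ V.T   # project onto the PSD cone
        np.fill_diagonal(X, 1.0)                # restore the unit diagonal
    return X
```

Applied to a symmetric matrix with a negative eigenvalue, the iteration shifts the negative spectrum to (numerically) non-negative while keeping the unit diagonal.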

2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
An Liu ◽  
Erwie Zahara ◽  
Ming-Ta Yang

Ordinary differential equations usefully describe the behavior of a wide range of dynamic physical systems. The particle swarm optimization (PSO) method has been considered an effective tool for solving engineering optimization problems involving ordinary differential equations. This paper proposes a modified hybrid Nelder-Mead simplex search and particle swarm optimization (M-NM-PSO) method for solving parameter estimation problems. The M-NM-PSO method improves on the efficiency of the PSO method and the conventional NM-PSO method through more rapid convergence and better objective function values. Studies are made for three well-known cases, and the solutions of the M-NM-PSO method are compared with those obtained by other methods published in the literature. The results demonstrate that the proposed M-NM-PSO method yields better estimation results than the genetic algorithm, the modified genetic algorithm (real-coded GA (RCGA)), the conventional particle swarm optimization (PSO) method, and the conventional NM-PSO method.
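To make the setup concrete, here is a minimal plain-PSO sketch for a one-parameter ODE fit — recovering the decay rate k in y' = −ky from samples of its solution. It omits the Nelder-Mead simplex steps of the hybrid method, and all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameter-estimation problem: recover k in y' = -k*y from
# samples of the exact solution y(t) = exp(-k*t) with true k = 1.5.
t = np.linspace(0.0, 2.0, 20)
y_obs = np.exp(-1.5 * t)

def sse(k):
    # Sum-of-squared-errors objective for a candidate decay rate k.
    return np.sum((np.exp(-k * t) - y_obs) ** 2)

# Plain PSO; the M-NM-PSO hybrid would interleave simplex-search steps.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 5.0, n)          # particle positions (candidate k)
vel = np.zeros(n)
pbest = pos.copy()                      # per-particle best positions
pbest_f = np.array([sse(k) for k in pos])
gbest = pbest[pbest_f.argmin()]         # swarm-wide best position
for _ in range(100):
    r1, r2 = rng.random(n), rng.random(n)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([sse(k) for k in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]
```

After the loop, `gbest` holds the swarm's estimate of the decay rate.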


Axioms ◽  
2018 ◽  
Vol 7 (3) ◽  
pp. 51 ◽  
Author(s):  
Carmela Scalone ◽  
Nicola Guglielmi

In this article we present and discuss a two-step methodology to find the closest low-rank completion of a large sparse matrix. Given a large sparse matrix M, the method consists of fixing the rank at r and then looking for the closest rank-r matrix X to M, where the distance is measured in the Frobenius norm. A key element in the solution of this matrix nearness problem is the use of a constrained gradient system of matrix differential equations. The obtained results, compared with those of different approaches, show that the method behaves correctly and is competitive with the ones available in the literature.
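For contrast with the fully observed case: when M is dense and completely known, the Eckart-Young theorem says the closest rank-r matrix in the Frobenius norm is simply the truncated SVD; it is the sparse, partially observed setting addressed by the article that requires the constrained gradient system. A minimal sketch (function name illustrative):

```python
import numpy as np

def closest_rank_r(M, r):
    # Eckart-Young: for a fully observed matrix M, the nearest rank-r
    # matrix in the Frobenius norm is its rank-r truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

The Frobenius distance to the truncation equals the root-sum-of-squares of the discarded singular values.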


Author(s):  
Mikhail Krechetov ◽  
Jakub Marecek ◽  
Yury Maximov ◽  
Martin Takac

Low-rank methods for semi-definite programming (SDP) have gained considerable interest recently, especially in machine learning applications. Their analysis often involves determinant-based or Schatten-norm penalties, which are difficult to implement in practice due to their high computational cost. In this paper, we propose Entropy-Penalized Semi-Definite Programming (EP-SDP), which provides a unified framework for a broad class of penalty functions used in practice to promote a low-rank solution. We show that EP-SDP problems admit an efficient numerical algorithm whose gradient computation has (almost) linear time complexity; this makes it useful for many machine learning and optimization problems. We illustrate the practical efficiency of our approach on several combinatorial optimization and machine learning problems.
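To fix ideas on the low-rank SDP setting (this sketches only the generic low-rank factorization idea the paper builds on, not the entropy-penalized algorithm itself), consider the MaxCut SDP relaxation with X = VVᵀ and unit-norm rows of V, so that the constraints diag(X) = 1 and X ⪰ 0 hold by construction; all names and step sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2
W = rng.random((n, n)); W = np.triu(W, 1); W = W + W.T   # random weighted graph
L = np.diag(W.sum(axis=1)) - W                           # graph Laplacian

# Burer-Monteiro-style factorization: optimize over V with unit-norm
# rows instead of over the full PSD matrix X, maximizing <L/4, V V^T>.
V = rng.standard_normal((n, r))
V /= np.linalg.norm(V, axis=1, keepdims=True)
for _ in range(200):
    V += 0.05 * (L @ V)                              # gradient ascent step
    V /= np.linalg.norm(V, axis=1, keepdims=True)    # retract rows to the sphere
X = V @ V.T
```

By construction the recovered X is feasible for the SDP and has rank at most r.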


2020 ◽  
Vol 40 (4) ◽  
pp. 2626-2651
Author(s):  
André Uschmajew ◽  
Bart Vandereycken

The absence of spurious local minima in certain nonconvex low-rank matrix recovery problems has been of recent interest in computer science, machine learning and compressed sensing since it explains the convergence of some low-rank optimization methods to global optima. One such example is low-rank matrix sensing under restricted isometry properties (RIPs). It can be formulated as a minimization problem for a quadratic function on the Riemannian manifold of low-rank matrices, with a positive semidefinite Riemannian Hessian that acts almost like an identity on low-rank matrices. In this work new estimates for singular values of local minima for such problems are given, which lead to improved bounds on RIP constants to ensure absence of nonoptimal local minima and sufficiently negative curvature at all other critical points. A geometric viewpoint is taken, which is inspired by the fact that the Euclidean distance function to a rank-$k$ matrix possesses no critical points on the corresponding embedded submanifold of rank-$k$ matrices except for the single global minimum.
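A small numerical sketch of the kind of nonconvex problem in question: factored gradient descent for low-rank matrix sensing with Gaussian measurements and spectral initialization. All names, sizes, and step-size choices are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 6, 2, 200
U_true = rng.standard_normal((n, k))
X_true = U_true @ U_true.T                      # planted rank-k PSD matrix
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2              # symmetric Gaussian measurements
b = np.einsum('mij,ij->m', A, X_true)           # noiseless observations

# Spectral initialization: (1/m) * sum_i b_i A_i concentrates around X_true.
M0 = np.einsum('m,mij->ij', b, A) / m
w, V = np.linalg.eigh(M0)
U = V[:, -k:] * np.sqrt(np.clip(w[-k:], 0.0, None))

# Gradient descent on f(U) = (1/2m) * sum_i (<A_i, U U^T> - b_i)^2.
eta = 0.1 / np.linalg.norm(X_true, 2)           # step scaled by the top singular value
for _ in range(1500):
    r = np.einsum('mij,ij->m', A, U @ U.T) - b  # measurement residuals
    U -= eta * (2.0 / m) * np.einsum('m,mij->ij', r, A) @ U
```

With enough measurements for an RIP to hold, the benign landscape described above is what lets such plain gradient descent recover the planted matrix.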


2009 ◽  
Vol 46 (04) ◽  
pp. 1130-1145 ◽  
Author(s):  
G. Deligiannidis ◽  
H. Le ◽  
S. Utev

In this paper we present an explicit solution to the infinite-horizon optimal stopping problem for processes with stationary independent increments, where the reward functions admit a certain representation in terms of the process at a random time. It is shown that it is optimal to stop at the first time the process crosses a level defined as the root of an equation obtained from the representation of the reward function. We obtain an explicit formula for the value function in terms of the infimum and supremum of the process, by making use of the Wiener–Hopf factorization. The main results are applied to several problems considered in the literature, giving a unified approach, and to new optimization problems from the finance industry.


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Lingchen Kong ◽  
Levent Tunçel ◽  
Naihua Xiu

Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally NP-hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski (Math Program, 2011)) to linear transformations in LMR. Using the two characteristic s-goodness constants, γs and γ̂s, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space properties. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization.
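The basic computational primitive behind nuclear norm minimization (standard background, not a construction from this paper) is singular value shrinkage, the proximal operator of τ‖·‖_*:

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of tau * nuclear norm: soft-threshold the
    # singular values. This shrinkage is the core step inside
    # nuclear-norm solvers for low-rank matrix recovery.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.clip(s - tau, 0.0, None)) @ Vt
```

Singular values at or below τ are zeroed, so the operator promotes low rank exactly as the nuclear norm surrogate intends.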


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5051 ◽  
Author(s):  
Deyin Liu ◽  
Chengwu Liang ◽  
Zhiming Zhang ◽  
Lin Qi ◽  
Brian C. Lovell

Image set matching (ISM) has attracted increasing attention in the field of computer vision and pattern recognition. Some studies attempt to model query and gallery sets under a joint or collaborative representation framework, achieving impressive performance. However, existing models consider only the competition and collaboration among gallery sets, neglecting the inter-instance relationships within the query set which are also regarded as one important clue for ISM. In this paper, inter-instance relationships within the query set are explored for robust image set matching. Specifically, we propose to represent the query set instances jointly via a combined dictionary learned from the gallery sets. To explore the commonality and variations within the query set simultaneously to benefit the matching, both low rank and class-level sparsity constraints are imposed on the representation coefficients. Then, to deal with nonlinear data in real scenarios, the kernelized version is also proposed. Moreover, to tackle the gross corruptions mixed in the query set, the proposed model is extended for robust ISM. The optimization problems are solved efficiently by employing singular value thresholding and block soft thresholding operators in an alternating direction manner. Experiments on five public datasets demonstrate the effectiveness of the proposed method, comparing favorably with state-of-the-art methods.
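The two proximal operators named above are standard building blocks. As an illustration of the second, block (class-level) soft thresholding shrinks each coefficient row by its ℓ2 norm, zeroing entire blocks at once (the function name is illustrative):

```python
import numpy as np

def block_soft_threshold(C, tau):
    # Block soft thresholding: scale each row of the coefficient
    # matrix C by max(0, 1 - tau / ||row||_2), which zeroes whole
    # rows and thereby enforces class-level sparsity.
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.clip(1.0 - tau / np.maximum(norms, 1e-12), 0.0, None)
    return C * scale
```

Rows with ℓ2 norm at or below τ vanish entirely, while larger rows are shrunk toward zero.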

