symmetric positive definite
Recently Published Documents


TOTAL DOCUMENTS: 302 (five years: 62)
H-INDEX: 33 (five years: 3)

Mathematics, 2022, Vol. 10 (2), p. 255
Author(s): Xiaomin Duan, Xueting Ji, Huafei Sun, Hao Guo

A non-iterative method for the difference of means is presented to calculate the log-Euclidean distance between a symmetric positive-definite matrix and the mean matrix on the Lie group of symmetric positive-definite matrices. Although affine-invariant Riemannian metrics rest on a complete theoretical framework and avoid the drawbacks of the Euclidean inner product, their complicated formulas lead to sophisticated and time-consuming algorithms. To overcome this limitation, log-Euclidean metrics, which have simpler formulas and faster computations, are employed in this manuscript. Our approach transforms a symmetric positive-definite matrix into a symmetric matrix via the logarithmic map, and then transforms the result back to the Lie group through the exponential map. Moreover, the method does not need to compute the mean matrix, and it retains the usual Euclidean operations in the domain of matrix logarithms. For randomly generated positive-definite matrices, the method is compared experimentally with the one induced by the classical affine-invariant Riemannian metric. Finally, the proposed method is applied, via the K-means clustering algorithm, to denoising point clouds corrupted by high-density noise.
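As a concrete illustration (not the authors' implementation), the log-Euclidean distance and mean described above can be sketched in a few lines of NumPy, using eigendecompositions for the matrix logarithm and exponential:

```python
import numpy as np

def spd_log(A):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    # Matrix exponential of a symmetric matrix (maps back onto the SPD manifold).
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_distance(A, B):
    # d(A, B) = || log(A) - log(B) ||_F
    return np.linalg.norm(spd_log(A) - spd_log(B), "fro")

def log_euclidean_mean(mats):
    # Arithmetic mean in the logarithm domain, mapped back with the exponential.
    return spd_exp(sum(spd_log(M) for M in mats) / len(mats))
```

For example, the log-Euclidean mean of I and 4I is 2I, since averaging their logarithms gives log(2)·I.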


2021, Vol. 2021, pp. 1-11
Author(s): Xi Liu, Zengrong Zhan, Guo Niu

Image recognition tasks involve an increasing amount of symmetric positive definite (SPD) matrix data. SPD manifolds exhibit nonlinear geometry, so Euclidean machine learning methods cannot be applied to them directly. The kernel trick on SPD manifolds rests on projecting the data onto a reproducing kernel Hilbert space. Unfortunately, existing kernel methods do not consider the connection between SPD matrices and linear projections. This paper therefore proposes a framework that models the kernel map through the correlation between SPD matrices and projections. To realize this, it formulates a Hilbert–Schmidt independence criterion (HSIC) regularization framework based on the kernel trick, where HSIC is commonly used to measure the statistical dependence between two datasets. The framework allows existing kernel methods to be extended to new HSIC-regularized kernel methods. Additionally, the paper proposes an algorithm for SPD manifolds, called HSIC-regularized graph discriminant analysis (HRGDA), built on this framework. Experimental results on several classification tasks show that both the HSIC regularization framework and HRGDA are accurate and effective.
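For reference, the standard (biased) empirical HSIC estimate underlying such a regularizer is tr(KHLH)/(n−1)², where K and L are Gram matrices of the two datasets and H is the centering matrix. A minimal NumPy sketch, independent of the paper's HRGDA algorithm, is:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gram matrix of the Gaussian RBF kernel on the rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(K, L):
    # Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T centers the Gram matrices.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

HSIC vanishes when one dataset is constant (its centered Gram matrix is zero) and is strictly positive for a non-degenerate dataset paired with itself.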


Author(s): Vitaliy Tayanov, Adam Krzyżak, Ching Y. Suen

This paper introduces a new line of research on geometric classifier ensemble learning using two types of objects: classifier prediction pairwise matrices (CPPMs) and decision profiles (DPs). Learning from CPPMs uses Riemannian manifolds (R-manifolds) of symmetric positive definite (SPD) matrices, while DPs can be used to build a Grassmann manifold (G-manifold). Experimental results show that classifier ensembles and their cascades built on R-manifolds depend less on certain properties of the individual classifiers (e.g. the depth of decision trees in random forests (RFs) or extra trees (ETs)) than those built on G-manifolds or in Euclidean geometry. More independent individual classifiers yield R-manifolds with better properties for classification. In general, classification accuracy in nonlinear geometry is higher than in Euclidean geometry. For multi-class problems, G-manifolds perform similarly, in terms of classification accuracy, to stacking-based classifiers built on R-manifolds of SPD matrices.
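The paper's exact CPPM construction is not reproduced in the abstract; one plausible sketch, in which entry (i, j) is the fraction of samples on which classifiers i and j agree (an agreement kernel, ridge-regularized so the matrix is safely SPD), might look like:

```python
import numpy as np

def cppm(predictions, eps=1e-6):
    # Hypothetical classifier-prediction pairwise matrix: entry (i, j)
    # is the fraction of samples on which classifiers i and j agree.
    # The indicator "agreement" kernel is positive semidefinite; a small
    # ridge term eps * I keeps the matrix strictly positive definite.
    P = np.asarray(predictions)          # shape: (m classifiers, n samples)
    m = P.shape[0]
    M = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            M[i, j] = np.mean(P[i] == P[j])
    return M + eps * np.eye(m)
```

The resulting SPD matrices can then be compared with any of the Riemannian metrics discussed on this page.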


Entropy, 2021, Vol. 23 (9), p. 1214
Author(s): Yihao Luo, Shiqiang Zhang, Yueqi Cao, Huafei Sun

The Wasserstein distance, especially between symmetric positive-definite matrices, has had a broad and deep influence on the development of artificial intelligence (AI) and other branches of computer science. In this paper, using the Wasserstein metric on SPD(n), we obtain computationally feasible expressions for several geometric quantities, including geodesics, exponential maps, the Riemannian connection, Jacobi fields, and curvatures, particularly the scalar curvature. Furthermore, we discuss the behavior of geodesics and prove that the manifold is globally geodesically convex. Finally, we design algorithms, based on the Wasserstein curvature on SPD(n), for denoising point clouds and detecting edges in noise-polluted images. The experimental results show the efficiency and robustness of our curvature-based methods.
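On SPD(n), the Wasserstein (Bures) distance has the closed form d(A, B)² = tr(A) + tr(B) − 2 tr((A^{1/2} B A^{1/2})^{1/2}). A direct NumPy sketch of this formula (illustrative, not the authors' code) is:

```python
import numpy as np

def spd_sqrt(A):
    # Principal square root of an SPD matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def bures_wasserstein(A, B):
    # d(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2});
    # the max(..., 0) guards against tiny negative round-off.
    As = spd_sqrt(A)
    cross = spd_sqrt(As @ B @ As)
    d2 = np.trace(A) + np.trace(B) - 2 * np.trace(cross)
    return np.sqrt(max(d2, 0.0))
```

For commuting matrices the formula reduces to ||A^{1/2} − B^{1/2}||_F; e.g. d(I, 4I) = √2 in dimension 2.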


2021, Vol. 2021, pp. 1-10
Author(s): Gashaye Dessalew, Tesfaye Kebede, Gurju Awgichew, Assaye Walelign

In this paper, we present the refinement of the multiparameter overrelaxation (RMPOR) method, which is used to solve linear systems of equations. We investigate its convergence for several classes of matrices: strictly diagonally dominant matrices, symmetric positive definite matrices, and M-matrices. The proposed method requires fewer iterations than the multiparameter overrelaxation method, and its spectral radius is also smaller. To demonstrate its efficiency, we prove several theorems and work through numerical examples.
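The RMPOR iteration itself is not given in the abstract; for context, a minimal sketch of the classical successive overrelaxation (SOR) baseline it refines, which converges for symmetric positive definite systems whenever 0 < ω < 2, is:

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    # Successive overrelaxation for A x = b. For SPD A, the iteration
    # converges for any relaxation parameter 0 < omega < 2.
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        for i in range(n):
            # Gauss-Seidel sweep: use already-updated components of x,
            # then blend the old and new values with weight omega.
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            break
    return x
```

Choosing ω closer to the optimal relaxation parameter (which depends on the spectral radius of the Jacobi iteration matrix) is what overrelaxation-type refinements tune.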


Author(s): Rhea Sanjay Sukthanker, Zhiwu Huang, Suryansh Kumar, Erik Goron Endsjo, Yan Wu, ...

In this paper, we propose a new neural architecture search (NAS) problem for Symmetric Positive Definite (SPD) manifold networks, aiming to automate the design of SPD neural architectures. To address this problem, we first introduce a geometrically rich and diverse SPD neural architecture search space for efficient SPD cell design. We then model the new NAS problem with a one-shot training process of a single supernet. Based on this supernet modeling, we apply a differentiable NAS algorithm to our relaxed continuous search space for SPD architectures. Statistical evaluation on drone, action, and emotion recognition tasks shows that our method mostly outperforms state-of-the-art SPD networks and traditional NAS algorithms. Empirical results also show that the algorithm excels at discovering better-performing SPD network designs and yields models more than three times lighter than those found by state-of-the-art NAS algorithms.


2021, Vol. 37, pp. 549-561
Author(s): Paraskevi Fika, Marilena Mitrouli, Ondrej Turec

The central mathematical problem studied in this work is the estimation of the quadratic form $x^TA^{-1}x$ for a given symmetric positive definite matrix $A \in \mathbb{R}^{n \times n}$ and vector $x \in \mathbb{R}^n$. Several methods to estimate $x^TA^{-1}x$ without computing the matrix inverse are proposed, and the precision of the estimates is studied both theoretically and numerically.
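For comparison with such estimators, the exact value of $x^TA^{-1}x$ can be computed without ever forming $A^{-1}$: solve $Ay = x$ with one Cholesky factorization and return $x^Ty$. This baseline (an illustrative sketch, not one of the paper's estimators) is:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def quad_form_inv(A, x):
    # Evaluates x^T A^{-1} x without forming the inverse:
    # Cholesky-factor the SPD matrix A, solve A y = x, return x^T y.
    c = cho_factor(A)
    return x @ cho_solve(c, x)
```

The estimators studied in the paper aim to approximate this quantity more cheaply than a full factorization-based solve.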

