matrix trace
Recently Published Documents


TOTAL DOCUMENTS: 49 (FIVE YEARS: 8)
H-INDEX: 12 (FIVE YEARS: 1)

CAUCHY · 2021 · Vol 6 (4) · pp. 200-211
Author(s): Rahmawati Rahmawati, Aryati Citra, Fitri Aryani, Corry Corazon Marzuki, Yuslenita Muda

The rectangle matrix discussed in this research is a special matrix, denoted An, in which every entry in each row has the same value. The main aim of this paper is to find the general form of the trace of An raised to a positive integer power m. Mathematical induction and direct proof are used to confirm this general form.
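As an illustration of the kind of identity involved, the sketch below assumes one possible reading of An (an n × n matrix whose entries all equal a constant a; the exact form studied in the paper is not given in the abstract) and numerically checks the resulting closed form tr(An^m) = (a·n)^m.

```python
import numpy as np

def trace_of_power(A, m):
    """Trace of A raised to the positive integer power m."""
    return np.trace(np.linalg.matrix_power(A, m))

# Hypothetical reading of An: an n x n matrix whose entries all equal a.
# Then An = a * Jn, where Jn is the all-ones matrix and Jn^m = n**(m-1) * Jn,
# so tr(An^m) = (a * n)**m.
n, a = 5, 3.0
A = a * np.ones((n, n))

for m in range(1, 6):
    assert np.isclose(trace_of_power(A, m), (a * n) ** m)
print("tr(An^m) == (a*n)^m verified for m = 1..5")
```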


Mathematics · 2021 · Vol 9 (7) · pp. 723
Author(s): Yonggang Li, Jing Wang, Huafei Sun

Matrix eigenvalues are very important in matrix analysis and have been applied to matrix trace inequalities, such as the Lieb–Thirring–Araki theorem and the Thompson–Golden theorem. In this manuscript, we obtain a matrix eigenvalue inequality by using the Stein–Hirschman operator interpolation inequality; then, according to the properties of exterior algebra and the Schur-convex function, we provide a new proof for the generalization of the Lieb–Thirring–Araki theorem and the Furuta theorem.
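For context, the sketch below numerically checks the classical Thompson–Golden inequality tr(exp(A + B)) ≤ tr(exp(A) exp(B)) for Hermitian A and B, one of the trace inequalities named in the abstract; it does not reproduce the paper's new generalization.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Random n x n Hermitian matrix."""
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

# Thompson-Golden: tr(exp(A + B)) <= tr(exp(A) exp(B)) for Hermitian A, B.
for _ in range(100):
    A, B = random_hermitian(4), random_hermitian(4)
    lhs = np.trace(expm(A + B)).real
    rhs = np.trace(expm(A) @ expm(B)).real
    assert lhs <= rhs * (1 + 1e-9)
print("Thompson-Golden inequality held in all 100 random trials")
```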


Author(s): Akira Imakura, Momo Matsuda, Xiucai Ye, Tetsuya Sakurai

Dimensionality reduction methods that project high-dimensional data to a low-dimensional space by matrix trace optimization are widely used for clustering and classification. The matrix trace optimization problem leads to an eigenvalue problem whose solution constructs a low-dimensional subspace that preserves certain properties of the original data. However, most existing methods use only a few eigenvectors to construct the low-dimensional space, which may lose information useful for successful classification. To overcome this information loss, we propose a novel complex moment-based supervised eigenmap that includes multiple eigenvectors for dimensionality reduction. Furthermore, the proposed method provides a general formulation that allows matrix trace optimization methods to be combined with ridge regression, which models the linear dependency between covariates and univariate labels. To reduce the computational complexity, we also propose an efficient parallel implementation of the proposed method. Numerical experiments indicate that the proposed method is competitive with existing dimensionality reduction methods in terms of recognition performance, and it exhibits high parallel efficiency.
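To make the "trace optimization leads to an eigenvalue problem" step concrete, here is a generic LDA-style sketch that maximizes tr(Wᵀ S_b W) subject to Wᵀ S_w W = I via a generalized eigenvalue problem. The function name and the small regularization term are illustrative assumptions; this is not the authors' complex moment-based supervised eigenmap.

```python
import numpy as np
from scipy.linalg import eigh

def trace_ratio_embedding(X, y, dim):
    """Maximize tr(W^T S_b W) subject to W^T S_w W = I via a generalized
    eigenvalue problem, keeping the top `dim` eigenvectors."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    S_b, S_w = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mean)[:, None]
        S_b += len(Xc) * diff @ diff.T                      # between-class scatter
        S_w += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))  # within-class scatter
    # Small ridge so the matrix pencil is positive definite.
    vals, vecs = eigh(S_b, S_w + 1e-6 * np.eye(d))
    W = vecs[:, np.argsort(vals)[::-1][:dim]]               # top eigenvectors
    return X @ W

# Toy usage with random labeled data.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))
y = rng.integers(0, 3, size=60)
Z = trace_ratio_embedding(X, y, dim=2)
print(Z.shape)  # (60, 2)
```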


2018 · Vol 30 (11) · pp. 3095-3127
Author(s): Kishan Wimalawarne, Makoto Yamada, Hiroshi Mamitsuka

We propose a set of convex low-rank inducing norms for coupled matrices and tensors (hereafter referred to as coupled tensors), in which information is shared between the matrices and tensors through common modes. More specifically, we first propose a mixture of the overlapped trace norm and the latent norms with the matrix trace norm, and then propose a completion model regularized using these norms to impute coupled tensors. A key advantage of the proposed norms is that they are convex and can be used to find a globally optimal solution, whereas existing methods for coupled learning are nonconvex. We also analyze the excess risk bounds of the completion model regularized using our proposed norms and show that they can exploit the low-rankness of coupled tensors, leading to better bounds than those obtained using uncoupled norms. Through synthetic and real-data experiments, we show that the proposed completion model compares favorably with existing ones.
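For background, the matrix trace norm mentioned in the abstract is the nuclear norm (the sum of singular values), and trace-norm-regularized completion models are typically solved with its proximal operator, singular-value soft-thresholding. The sketch below shows only these two generic building blocks; it is not the coupled-norm formulation proposed in the paper.

```python
import numpy as np

def trace_norm(X):
    """Matrix trace norm (nuclear norm): the sum of singular values of X."""
    return np.linalg.svd(X, compute_uv=False).sum()

def prox_trace_norm(X, lam):
    """Singular-value soft-thresholding:
    argmin_Z 0.5 * ||Z - X||_F^2 + lam * ||Z||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Toy usage: shrinkage reduces the trace norm.
rng = np.random.default_rng(2)
X = rng.standard_normal((8, 5))
Z = prox_trace_norm(X, lam=1.0)
print(trace_norm(X), trace_norm(Z))
```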

