orthogonal matrix
Recently Published Documents

TOTAL DOCUMENTS: 207 (five years: 17)
H-INDEX: 19 (five years: 1)

2021 ◽  Vol 4 ◽  pp. 10-15
Author(s): Gennadii Malaschonok ◽  Serhii Sukharskyi

With the development of Big Data and of fields related to artificial intelligence, fast and efficient computing has become one of the most important requirements. That is why, over the recent decade, graphics-processing-unit (GPU) computing has been actively developed, giving scientists and developers access to the thousands of cores a GPU provides for intensive computations. The goal of this research is to implement the orthogonal decomposition of a matrix by applying a series of Householder transformations in Java, using the JCuda library, and to study its benefits. Several related papers were examined. Malaschonok and Savchenko introduced an improved version of the QR algorithm for this purpose [4] and achieved better results; however, according to Lahabar and Narayanan [6], the Householder algorithm is more promising for GPUs. They used single-precision (float) numbers, whereas we use double precision, and in addition we are working on a new BigDecimal type for CUDA. Moreover, there is still no solution for handling huge matrices, where errors in the calculations may occur. The algorithm of orthogonal matrix decomposition, which is the first part of the SVD algorithm, is researched and implemented in this work. We present an implementation of matrix bidiagonalization and of the calculation of the orthogonal factors by the Householder method in the JCuda environment on a graphics processor; the algorithm is also implemented for the central processor for comparison. We experimentally measured the speedup of the calculations obtained with the graphics processor relative to the CPU implementation. We show a speedup of up to 53 times over the CPU implementation on a large matrix (size 2048), and even better results when using more advanced GPUs.
At the same time, we still observe larger calculation errors when using graphics processing units, due to synchronization problems. We compared execution on different platforms (Windows 10 and Arch Linux) and found their computation speeds to be almost identical. The results show that better performance can be achieved on the GPU, although this approach involves more implementation difficulties.
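As a reference point for the GPU work described in this abstract, the orthogonal decomposition it discusses — QR via a series of Householder reflections — can be sketched on the CPU in Java as follows. This is a minimal sketch; the class and method names are ours, not the authors' JCuda implementation.

```java
// Minimal CPU sketch of QR decomposition via Householder reflections.
// Each step k builds a reflector H = I - 2*v*v^T / (v^T v) that zeroes
// column k of R below the diagonal; Q accumulates the reflectors.
public class HouseholderQR {

    // Returns {Q, R} with A = Q * R, Q orthogonal, R upper triangular.
    static double[][][] decompose(double[][] a) {
        int n = a.length;
        double[][] q = identity(n);
        double[][] r = copy(a);
        for (int k = 0; k < n - 1; k++) {
            double norm = 0;
            for (int i = k; i < n; i++) norm += r[i][k] * r[i][k];
            norm = Math.sqrt(norm);
            if (norm == 0) continue;
            // Sign chosen to avoid cancellation in v[k].
            double alpha = r[k][k] > 0 ? -norm : norm;
            double[] v = new double[n];
            v[k] = r[k][k] - alpha;
            for (int i = k + 1; i < n; i++) v[i] = r[i][k];
            double vtv = 0;
            for (int i = k; i < n; i++) vtv += v[i] * v[i];
            if (vtv == 0) continue;
            applyLeft(r, v, vtv, k);   // R <- H * R
            applyRight(q, v, vtv, k);  // Q <- Q * H
        }
        return new double[][][]{q, r};
    }

    static void applyLeft(double[][] m, double[] v, double vtv, int k) {
        int n = m.length;
        for (int j = 0; j < n; j++) {
            double dot = 0;
            for (int i = k; i < n; i++) dot += v[i] * m[i][j];
            double s = 2 * dot / vtv;
            for (int i = k; i < n; i++) m[i][j] -= s * v[i];
        }
    }

    static void applyRight(double[][] m, double[] v, double vtv, int k) {
        int n = m.length;
        for (int i = 0; i < n; i++) {
            double dot = 0;
            for (int j = k; j < n; j++) dot += m[i][j] * v[j];
            double s = 2 * dot / vtv;
            for (int j = k; j < n; j++) m[i][j] -= s * v[j];
        }
    }

    static double[][] identity(int n) {
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++) m[i][i] = 1;
        return m;
    }

    static double[][] copy(double[][] a) {
        double[][] m = new double[a.length][];
        for (int i = 0; i < a.length; i++) m[i] = a[i].clone();
        return m;
    }
}
```

The same reflections, applied alternately from the left and from the right, yield the bidiagonalization step the abstract mentions as the first phase of SVD.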


Author(s): Erhan Ata ◽  Ümit Ziya Savci

In this study, we obtain the generalized Cayley formula, the Rodrigues equation, and the Euler parameters of an orthogonal matrix in the 3-dimensional generalized space [Formula: see text]. It is shown that a unit generalized quaternion, defined by the generalized Euler parameters, corresponds to a rotation in the [Formula: see text] space. We derive the rotation in matrix-equation form using the matrix form of the generalized quaternion product. Besides, in the [Formula: see text] space, we obtain the rotations determined by unit quaternions and unit split quaternions, which are special cases of generalized quaternions for [Formula: see text], in the 3-dimensional Euclidean space [Formula: see text] and in the 3-dimensional Lorentzian space [Formula: see text], respectively.
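The correspondence between unit quaternions and rotations that this abstract generalizes can be illustrated in the ordinary Euclidean special case: a unit quaternion q = (q0, q1, q2, q3), whose components are the Euler parameters, maps to an orthogonal 3 × 3 rotation matrix by the standard Euler–Rodrigues formula. The class and method names below are ours, for illustration only.

```java
// Euclidean special case: Euler parameters (a unit quaternion with
// q0^2 + q1^2 + q2^2 + q3^2 = 1) mapped to a rotation matrix in E^3
// via the Euler-Rodrigues formula.
public class EulerParameters {
    static double[][] toRotationMatrix(double q0, double q1, double q2, double q3) {
        return new double[][]{
            {q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)},
            {2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)},
            {2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3}
        };
    }
}
```

For example, q = (cos(θ/2), 0, 0, sin(θ/2)) produces the rotation by angle θ about the z-axis.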


Author(s): Luca Bagnato ◽  Antonio Punzo

Abstract Many statistical problems involve the estimation of a (d × d) orthogonal matrix Q. Such an estimation is often challenging due to the orthonormality constraints on Q. To cope with this problem, we use the well-known PLU decomposition, which factorizes any invertible (d × d) matrix as the product of a (d × d) permutation matrix P, a (d × d) unit lower triangular matrix L, and a (d × d) upper triangular matrix U. Thanks to the QR decomposition, we find the formulation of U when the PLU decomposition is applied to Q. We call the result the PLR decomposition; it produces a one-to-one correspondence between Q and the d(d − 1)/2 entries below the diagonal of L, which are advantageously unconstrained real values. Thus, once the decomposition is applied, regardless of the objective function under consideration, we can use any classical unconstrained optimization method to find the minimum (or maximum) of the objective function with respect to L. For illustrative purposes, we apply the PLR decomposition in common principal components analysis (CPCA) for the maximum likelihood estimation of the common orthogonal matrix when a multivariate leptokurtic-normal distribution is assumed in each group. Compared to the commonly used normal distribution, the leptokurtic-normal has an additional parameter governing the excess kurtosis; this makes the estimation of Q in CPCA more robust against mild outliers. The usefulness of the PLR decomposition in leptokurtic-normal CPCA is illustrated by two biometric data analyses.
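The core idea of the abstract — parameterizing an orthogonal Q by the d(d − 1)/2 unconstrained entries below the diagonal of a unit lower triangular L — can be sketched as follows. This is our illustrative sketch, not the paper's implementation: we fix P = I for simplicity and recover Q by orthonormalizing the columns of L with modified Gram–Schmidt, which works because L = QR with R upper triangular implies Q = L R⁻¹ = P L U for an upper triangular U.

```java
// Sketch (assumed names, P = I): map d(d-1)/2 unconstrained reals to an
// orthogonal (d x d) matrix Q, following the PLR-decomposition idea.
public class PlrParam {
    static double[][] orthogonalFromParams(double[] theta, int d) {
        // Build unit lower triangular L: ones on the diagonal,
        // the free parameters theta filled in below it, column by column.
        double[][] q = new double[d][d];
        int t = 0;
        for (int j = 0; j < d; j++) {
            q[j][j] = 1.0;
            for (int i = j + 1; i < d; i++) q[i][j] = theta[t++];
        }
        // Modified Gram-Schmidt on the columns: L = Q * R, keep Q.
        for (int j = 0; j < d; j++) {
            for (int k = 0; k < j; k++) {
                double dot = 0;
                for (int i = 0; i < d; i++) dot += q[i][k] * q[i][j];
                for (int i = 0; i < d; i++) q[i][j] -= dot * q[i][k];
            }
            double norm = 0;
            for (int i = 0; i < d; i++) norm += q[i][j] * q[i][j];
            norm = Math.sqrt(norm);
            for (int i = 0; i < d; i++) q[i][j] /= norm;
        }
        return q;
    }
}
```

Because theta is unconstrained, any off-the-shelf unconstrained optimizer can search over it, with the orthonormality of Q maintained by construction — the property the abstract exploits for CPCA estimation.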

