Numerical Algorithms for Computing an Arbitrary Singular Value of a Tensor Sum

Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 211
Author(s):  
Asuka Ohashi ◽  
Tomohiro Sogabe

We consider computing an arbitrary singular value of a tensor sum: $T := I_n \otimes I_m \otimes A + I_n \otimes B \otimes I_\ell + C \otimes I_m \otimes I_\ell \in \mathbb{R}^{\ell mn \times \ell mn}$, where $A \in \mathbb{R}^{\ell \times \ell}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times n}$. We focus on the shift-and-invert Lanczos method, which solves the shift-and-invert eigenvalue problem of $(T^\top T - \tilde{\sigma}^2 I_{\ell mn})^{-1}$, where $\tilde{\sigma}$ is a scalar shift close to the desired singular value. The desired singular value is then obtained from the maximum eigenvalue of this eigenvalue problem. The shift-and-invert Lanczos method requires solving large-scale linear systems with the coefficient matrix $T^\top T - \tilde{\sigma}^2 I_{\ell mn}$. The preconditioned conjugate gradient (PCG) method is applied, since direct methods cannot be used owing to the nonzero structure of the coefficient matrix. However, a straightforward implementation of the shift-and-invert Lanczos and PCG methods is prohibitive in terms of memory, since the size of $T$ grows rapidly with the sizes of $A$, $B$, and $C$. In this paper, we present the following two techniques: (1) efficient implementations of the shift-and-invert Lanczos method for the eigenvalue problem of $T^\top T$ and of the PCG method for $T^\top T - \tilde{\sigma}^2 I_{\ell mn}$ using three-dimensional arrays (third-order tensors) and the $n$-mode products, and (2) preconditioning matrices for the PCG method based on the eigenvalue decomposition and the Schur decomposition of $T$. Finally, we show the effectiveness of the proposed methods through numerical experiments.
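
As a rough illustration of technique (1), the sketch below applies $T$ to a vector matrix-free by reshaping it into an $\ell \times m \times n$ third-order tensor and using the three mode products. This is only a minimal NumPy sketch, not the authors' implementation; the column-major vec ordering (mode 1 varying fastest) is an assumed Kronecker convention.

```python
import numpy as np

def apply_T(X, A, B, C):
    """Apply T = I_n⊗I_m⊗A + I_n⊗B⊗I_l + C⊗I_m⊗I_l to vec(X) without forming T,
    using the three mode products of the third-order tensor X (shape l x m x n)."""
    Y = np.einsum('ia,ajk->ijk', A, X)    # mode-1 product  X x_1 A
    Y += np.einsum('jb,ibk->ijk', B, X)   # mode-2 product  X x_2 B
    Y += np.einsum('kc,ijc->ijk', C, X)   # mode-3 product  X x_3 C
    return Y

# small consistency check against the explicitly assembled Kronecker sum
l, m, n = 3, 4, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((l, l))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, n))
x = rng.standard_normal(l * m * n)

T = (np.kron(np.eye(n), np.kron(np.eye(m), A))
     + np.kron(np.eye(n), np.kron(B, np.eye(l)))
     + np.kron(C, np.kron(np.eye(m), np.eye(l))))

X = x.reshape((l, m, n), order='F')               # column-major vec: mode 1 fastest
y = apply_T(X, A, B, C).reshape(-1, order='F')
print(np.allclose(T @ x, y))                       # expected: True
```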

2019 ◽  
Vol 16 (07) ◽  
pp. 1950038 ◽  
Author(s):  
S. H. Ju ◽  
H. H. Hsu

An out-of-core block Lanczos method with an OpenMP parallel scheme was developed to solve large sparse damped eigenproblems. The symmetric generalized eigenproblem is first solved using the block Lanczos method with the preconditioned conjugate gradient (PCG) method, and the condensed damped eigenproblem is then solved to obtain the complex eigenvalues. Since PCG solvers and out-of-core schemes are used, a large-scale eigenproblem can be solved using minimal computer memory. The out-of-core arrays only need to be read once in each Lanczos iteration, so the proposed method requires little extra CPU time. In addition, second-level OpenMP parallel computation in the PCG solver is suggested to avoid using a large block size, which often increases the number of iterations needed to achieve convergence.
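
The sketch below illustrates only the out-of-core idea: a plain single-vector Lanczos iteration whose basis lives in a numpy.memmap file and is streamed once per iteration for reorthogonalization. It is a simplified stand-in, not the paper's block Lanczos/PCG/OpenMP implementation, and all names and sizes are illustrative.

```python
import numpy as np

def lanczos_memmap(matvec, n, k, path="lanczos_basis.dat", seed=0):
    """Single-vector Lanczos with the basis stored out of core in a memory-mapped
    file; the stored vectors are streamed once per iteration to reorthogonalize."""
    V = np.memmap(path, dtype=np.float64, mode="w+", shape=(k + 1, n))
    v = np.random.default_rng(seed).standard_normal(n)
    V[0] = v / np.linalg.norm(v)
    alpha, beta = np.zeros(k), np.zeros(k)
    for j in range(k):
        w = matvec(np.asarray(V[j]))
        if j > 0:
            w -= beta[j - 1] * V[j - 1]
        alpha[j] = w @ V[j]
        w -= alpha[j] * V[j]
        for i in range(j + 1):          # one streaming pass over the on-disk basis
            w -= (w @ V[i]) * V[i]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:
            alpha, beta = alpha[:j + 1], beta[:j + 1]
            break
        V[j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)        # Ritz values approximate extreme eigenvalues

# usage: extreme eigenvalues of a diagonal test operator
d = np.linspace(1.0, 100.0, 5000)
ritz = lanczos_memmap(lambda x: d * x, n=d.size, k=40)
print(ritz[-3:])                        # the top Ritz values approach 100
```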


Author(s):  
C W Kim

The component mode synthesis (CMS) method has been used extensively in industry. However, industrial finite-element (FE) models need a more efficient CMS method for satisfactory performance, since FE model sizes keep increasing for more accurate analysis. Recently, the recursive component mode synthesis (RCMS) method was introduced to solve large-scale eigenvalue problems efficiently. This article focuses on the convergence of the RCMS method with respect to different parameters, and evaluates its accuracy and performance against the Lanczos method.


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ligang Cao ◽  
Yun Liu

Ideal numerical simulation of 3D magnetotelluric data has been restricted by methodological complexity and time-consuming computation. Boundary values, the variational form of the weighted residual equation, and hexahedral finite-element mesh generation are three major causes. A finite-element method for 3D magnetotelluric numerical modeling is presented in this paper as a solution to the problems mentioned above. In this algorithm, a hexahedral element coefficient matrix for the magnetotelluric finite-element method is developed, and the resulting large-scale equations with first-type (Dirichlet) boundary conditions are solved by the preconditioned conjugate gradient method. The algorithm is verified on a homogeneous model, a positive-landform model, and a low-resistance anomaly model.
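
As a hedged illustration of the solution step, the sketch below imposes first-type (Dirichlet) boundary conditions by eliminating the prescribed degrees of freedom and solves the reduced system with SciPy's conjugate gradient. A 3D seven-point Laplacian stands in for the hexahedral-element magnetotelluric coefficient matrix; all sizes and boundary data are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Hypothetical stand-in system: a 3D seven-point Laplacian plays the role of
# the hexahedral-element coefficient matrix described in the abstract.
n = 12
I = sp.identity(n, format="csr")
D = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
K = (sp.kron(sp.kron(D, I), I) + sp.kron(sp.kron(I, D), I)
     + sp.kron(sp.kron(I, I), D)).tocsr()
b = np.ones(K.shape[0])

# First-type (Dirichlet) boundary condition: prescribe values on one face,
# move them to the right-hand side, and solve the reduced system with CG.
bc = np.zeros(K.shape[0], dtype=bool)
bc[:n * n] = True                        # DOFs on one face of the grid
bc_vals = np.zeros(np.count_nonzero(bc))
free = ~bc

K_ff = K[free][:, free]
b_f = b[free] - K[free][:, bc] @ bc_vals

u = np.zeros(K.shape[0])
u[bc] = bc_vals
u[free], info = cg(K_ff, b_f)            # conjugate gradient on the reduced SPD system
print("converged" if info == 0 else f"cg returned {info}")
```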


2013 ◽  
Vol 838-841 ◽  
pp. 718-721
Author(s):  
Kun Yong Zhang ◽  
Gui Heng Xie

To solve the large symmetric indefinite linear systems arising from the finite-element discretization of 3D Biot consolidation equations, this paper incorporates a diagonally preconditioned conjugate gradient (PCG) method into an FE program. Several numerical examples show that the diagonal PCG method is significantly more efficient than direct solution methods for large-scale symmetric indefinite linear systems.
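
A minimal sketch of the diagonal (Jacobi) PCG kernel compared against a sparse direct solve is given below. The test matrix is a simple SPD stand-in, whereas the Biot consolidation systems in the paper are symmetric indefinite, so this only illustrates the preconditioning idea, not the paper's FE implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def diagonal_pcg(A, b, rtol=1e-8, maxiter=5000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner M = diag(|A_ii|)."""
    m_inv = 1.0 / np.abs(A.diagonal())
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= rtol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD stand-in for the FE consolidation matrix (the real Biot systems
# are symmetric indefinite; this only demonstrates the diagonal-PCG kernel)
n = 2000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x_pcg = diagonal_pcg(A, b)
x_direct = spsolve(A.tocsc(), b)
print(np.linalg.norm(x_pcg - x_direct) / np.linalg.norm(x_direct))
```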


2018 ◽  
Vol 82 (2) ◽  
pp. 699-717 ◽  
Author(s):  
Zhigang Jia ◽  
Michael K. Ng ◽  
Guang-Jing Song

2019 ◽  
Vol 2019 (1) ◽  
Author(s):  
Mu-Zheng Zhu ◽  
Guo-Feng Zhang ◽  
Ya-E Qi

By exploiting the Toeplitz-like structure and the dense, non-Hermitian character of the discrete coefficient matrix, a new double-layer iterative method, called the SHSS-PCG method, is employed to solve the linear systems arising from the implicit finite difference discretization of fractional diffusion equations (FDEs). The method combines the single-step Hermitian and skew-Hermitian splitting (SHSS) method with the preconditioned conjugate gradient (PCG) method. Furthermore, new circulant preconditioners are proposed to improve the efficiency of the SHSS-PCG method, and the computational cost is further reduced by using the fast Fourier transform (FFT). Theoretical analysis shows that the SHSS-PCG iterative method with circulant preconditioners is convergent. Numerical experiments show that the SHSS-PCG method with circulant preconditioners performs very well, and that the proposed circulant preconditioners are very efficient in accelerating the convergence rate.
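
The sketch below illustrates only the circulant-preconditioner-via-FFT ingredient: a Strang-type circulant built from a symmetric Toeplitz stand-in matrix is applied through FFTs inside SciPy's CG. The Toeplitz coefficients are hypothetical and the SHSS outer iteration is not reproduced.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

# Symmetric positive definite Toeplitz stand-in for a discretized FDE system;
# the coefficients below are hypothetical, not the paper's FDE discretization.
n = 256
c = np.zeros(n)
c[0], c[1], c[2] = 3.0, -1.0, -0.2         # hypothetical first column
A_t = toeplitz(c)

# Strang circulant preconditioner: copy the central diagonals of the Toeplitz
# matrix into a circulant one; its eigenvalues are the FFT of its first column.
s = np.array([c[k] if k <= n // 2 else c[n - k] for k in range(n)])
eig = np.fft.fft(s)

def apply_circulant_inverse(v):
    # C^{-1} v costs two FFTs and one pointwise division
    return np.real(np.fft.ifft(np.fft.fft(v) / eig))

M = LinearOperator((n, n), matvec=apply_circulant_inverse)
b = np.ones(n)
x, info = cg(A_t, b, M=M)                  # PCG with the circulant preconditioner
print("converged" if info == 0 else f"cg returned {info}")
```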


PAMM ◽  
2018 ◽  
Vol 18 (1) ◽  
Author(s):  
Peter Benner ◽  
Andreas Marek ◽  
Carolin Penke

2021 ◽  
Author(s):  
Shalin Shah

Recommender systems aim to personalize a user's experience by suggesting items based on the user's preferences. The preferences are learned from the user's interaction history or from explicit ratings that the user has given to items. The system could be part of a retail website, an online bookstore, a movie rental service, an online education portal, and so on. In this paper, I focus on matrix factorization algorithms as applied to recommender systems and discuss the singular value decomposition, gradient-descent-based matrix factorization, and parallelizing matrix factorization for large-scale applications.
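
As a minimal sketch of gradient-descent-based matrix factorization (not a production recommender), the code below fits R ≈ U Vᵀ by stochastic gradient descent on observed (user, item, rating) triples with L2 regularization; the data and hyperparameters are purely illustrative.

```python
import numpy as np

def factorize(ratings, k=8, epochs=50, lr=0.01, reg=0.05, seed=0):
    """Stochastic-gradient matrix factorization R ~= U @ V.T over a list of
    (user, item, rating) triples, with L2 regularization on both factors."""
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                  # prediction error on one rating
            U[u] += lr * (err * V[i] - reg * U[u]) # gradient step for the user factor
            V[i] += lr * (err * U[u] - reg * V[i]) # gradient step for the item factor
    return U, V

# toy interaction data: (user, item, rating)
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0), (2, 2, 5.0)]
U, V = factorize(data)
print(round(float(U[0] @ V[0]), 2))   # predicted rating for user 0, item 0
```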

