AN IMPLICITLY RESTARTED BIDIAGONAL LANCZOS METHOD FOR LARGE-SCALE SINGULAR VALUE PROBLEMS

Author(s):  
XIAOHUI WANG ◽  
HONGYUAN ZHA
Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 211
Author(s):  
Asuka Ohashi ◽  
Tomohiro Sogabe

We consider computing an arbitrary singular value of a tensor sum $T := I_n \otimes I_m \otimes A + I_n \otimes B \otimes I_\ell + C \otimes I_m \otimes I_\ell \in \mathbb{R}^{\ell m n \times \ell m n}$, where $A \in \mathbb{R}^{\ell \times \ell}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times n}$. We focus on the shift-and-invert Lanczos method, which solves the shift-and-invert eigenvalue problem of $(T^{\mathsf T} T - \tilde{\sigma}^2 I_{\ell m n})^{-1}$, where $\tilde{\sigma}$ is a scalar close to the desired singular value. The desired singular value is computed from the maximum eigenvalue of this eigenvalue problem. The shift-and-invert Lanczos method needs to solve large-scale linear systems with the coefficient matrix $T^{\mathsf T} T - \tilde{\sigma}^2 I_{\ell m n}$. The preconditioned conjugate gradient (PCG) method is applied, since direct methods cannot be used because of the nonzero structure of the coefficient matrix. However, a straightforward implementation of the shift-and-invert Lanczos and PCG methods is impractical in terms of memory, since the size of $T$ grows rapidly with the sizes of $A$, $B$, and $C$. In this paper, we present two techniques: (1) efficient implementations of the shift-and-invert Lanczos method for the eigenvalue problem of $T^{\mathsf T} T$ and of the PCG method for $T^{\mathsf T} T - \tilde{\sigma}^2 I_{\ell m n}$ using three-dimensional arrays (third-order tensors) and $n$-mode products, and (2) preconditioning matrices for the PCG method based on the eigenvalue decomposition and the Schur decomposition of $T$. Finally, we show the effectiveness of the proposed methods through numerical experiments.
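The key to the memory-efficient implementation is that $T$ can be applied to a vector without ever forming the Kronecker-structured matrix: the vector is reshaped into a third-order tensor and the three terms become $n$-mode products with $A$, $B$, and $C$. The following is a minimal NumPy sketch of such a matrix-free matvec, assuming the standard mode-1-first (column-major) vectorization; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def tensor_sum_matvec(A, B, C, x):
    """Apply T = I_n (x) I_m (x) A + I_n (x) B (x) I_l + C (x) I_m (x) I_l
    to a vector x of length l*m*n without forming T explicitly."""
    l, m, n = A.shape[0], B.shape[0], C.shape[0]
    X = x.reshape((l, m, n), order="F")        # column-major reshape matches the Kronecker ordering
    Y = (np.einsum("ip,pjk->ijk", A, X)        # mode-1 product: A acts along the first index
         + np.einsum("jp,ipk->ijk", B, X)      # mode-2 product: B acts along the second index
         + np.einsum("kp,ijp->ijk", C, X))     # mode-3 product: C acts along the third index
    return Y.reshape(l * m * n, order="F")
```

Applying $T^{\mathsf T} T$ then amounts to calling this routine twice, the second time with $A^{\mathsf T}$, $B^{\mathsf T}$, and $C^{\mathsf T}$, which is all the Lanczos and PCG iterations require.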


2018 ◽  
Vol 82 (2) ◽  
pp. 699-717 ◽  
Author(s):  
Zhigang Jia ◽  
Michael K. Ng ◽  
Guang-Jing Song

2021 ◽  
Author(s):  
Shalin Shah

Recommender systems aim to personalize a user's experience by suggesting items based on that user's preferences. The preferences are learned from the user's interaction history or from explicit ratings the user has given to items. The system could be part of a retail website, an online bookstore, a movie rental service, an online education portal, and so on. In this paper, I focus on matrix factorization algorithms as applied to recommender systems and discuss the singular value decomposition, gradient descent-based matrix factorization, and parallelizing matrix factorization for large-scale applications.
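As a concrete illustration of gradient descent-based matrix factorization, the sketch below learns the low-rank model R ≈ P Qᵀ from observed (user, item, rating) triples by plain stochastic gradient descent. It is a minimal example with illustrative hyperparameters, not the paper's implementation or its parallel variant.

```python
import numpy as np

def sgd_matrix_factorization(ratings, n_users, n_items, k=10,
                             lr=0.01, reg=0.05, epochs=20, seed=0):
    """Learn user factors P and item factors Q such that R ~ P @ Q.T,
    given `ratings` as a list of (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))      # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))      # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                    # error on this observed rating
            pu = P[u].copy()                         # keep old user factors for the item update
            P[u] += lr * (err * Q[i] - reg * P[u])   # L2-regularized gradient steps
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q
```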


Author(s):  
Khadija Ateya Almohsen ◽  
Huda Kadhim Al-Jobori

The increasing usage of e-commerce websites has led to the emergence of the Recommender System (RS), which aims to personalize web content for each user. One of the successful techniques of RSs is Collaborative Filtering (CF), which makes recommendations for users based on what other like-minded users have preferred. However, as the world enters the Big Data era, CF faces challenges such as scalability, sparsity, and cold start. Thus, new approaches that overcome these problems have been studied, such as the Singular Value Decomposition (SVD). This chapter surveys the literature on RSs, reviews their current state and the main concerns raised by Big Data, investigates SVD thoroughly, and provides an implementation of it using Apache Hadoop and Spark. This is intended to validate the applicability of existing contributions to the field of SVD-based RSs, as well as to validate the effectiveness of Hadoop and Spark in developing large-scale systems. The results proved the scalability of the SVD-based RS and its applicability to Big Data.
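A minimal sketch of the distributed truncated SVD step on Spark, using pyspark.mllib's RowMatrix.computeSVD on toy data; this is an assumed setup for illustration, not the chapter's actual Hadoop/Spark implementation.

```python
from pyspark.sql import SparkSession
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

spark = SparkSession.builder.appName("svd-recs").getOrCreate()

# Each row is one user's dense rating vector over the item catalogue (toy data).
rows = spark.sparkContext.parallelize([
    Vectors.dense([5.0, 3.0, 0.0, 1.0]),
    Vectors.dense([4.0, 0.0, 0.0, 1.0]),
    Vectors.dense([1.0, 1.0, 0.0, 5.0]),
    Vectors.dense([0.0, 1.0, 5.0, 4.0]),
])
mat = RowMatrix(rows)

# Keep the k leading singular triplets; U * diag(s) gives user factors, V gives item factors.
svd = mat.computeSVD(2, computeU=True)
print(svd.s)   # singular values
print(svd.V)   # item-factor matrix (small local dense matrix)
```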


2019 ◽  
Vol 16 (07) ◽  
pp. 1950038 ◽  
Author(s):  
S. H. Ju ◽  
H. H. Hsu

An out-of-core block Lanczos method with the OpenMP parallel scheme was developed to solve large sparse damped eigenproblems. The symmetric generalized eigenproblem is first solved using the block Lanczos method with the preconditioned conjugate gradient (PCG) method, and the condensed damped eigenproblem is then solved to obtain the complex eigenvalues. Since PCG solvers and out-of-core schemes are used, a large-scale eigenproblem can be solved using minimal computer memory. The out-of-core arrays only need to be read once in each Lanczos iteration, so the proposed method requires little extra CPU time. In addition, a second-level OpenMP parallel computation in the PCG solver is suggested to avoid using a large block size, which often increases the number of iterations needed to achieve convergence.
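The inner kernel of each block Lanczos step is a preconditioned conjugate gradient solve. Below is a minimal, matrix-free PCG sketch in Python for clarity; it does not reproduce the paper's out-of-core storage or its two-level OpenMP parallelism, and the preconditioner is left as a user-supplied callable.

```python
import numpy as np

def pcg(matvec, b, apply_prec, tol=1e-8, maxiter=500):
    """Preconditioned conjugate gradient for an SPD system A x = b,
    with `matvec(x)` applying A and `apply_prec(r)` applying M^{-1}."""
    x = np.zeros_like(b, dtype=float)
    r = b - matvec(x)                      # initial residual
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # update the search direction
        rz = rz_new
    return x
```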


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jengnan Tzeng

The singular value decomposition (SVD) is a fundamental matrix decomposition in linear algebra. It is widely applied in many modern techniques, for example, high-dimensional data visualization, dimension reduction, data mining, latent semantic analysis, and so forth. Although the SVD plays an essential role in these fields, its apparent weakness is its cubic computational cost. This cubic cost makes many modern applications infeasible, especially when the scale of the data is huge and growing. Therefore, it is imperative to develop a fast SVD method for the modern era. If the rank of the matrix is much smaller than the matrix size, there are already some fast SVD approaches. In this paper, we focus on this case, but with the additional condition that the data is too large to be stored in matrix form. We will demonstrate that this fast SVD result is sufficiently accurate and, most importantly, that it can be derived immediately. Using this fast method, many infeasible modern techniques based on the SVD will become viable.
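One common way to obtain such a fast, low-rank SVD when the data cannot be held in memory is to stream the matrix in column blocks and re-truncate after each block. The sketch below illustrates this general idea under that assumption; it is not the specific algorithm proposed in the paper.

```python
import numpy as np

def streamed_truncated_svd(column_blocks, k):
    """Maintain a rank-k approximation of A = [A1 A2 ...] while reading one
    column block at a time, so the full matrix is never stored. Returns
    approximate leading left singular vectors and singular values."""
    U, s = None, None
    for block in column_blocks:                      # block: (m, b) array of new columns
        if U is None:
            stacked = block
        else:
            stacked = np.hstack([U * s, block])      # rank-k summary of past columns + new block
        Ub, sb, _ = np.linalg.svd(stacked, full_matrices=False)
        U, s = Ub[:, :k], sb[:k]                     # re-truncate to rank k
    return U, s
```

Because past columns are replaced by their rank-k summary U diag(s), each step only factorizes a thin matrix with k plus the block's columns, rather than the full data set.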


Author(s):  
C W Kim

The component mode synthesis (CMS) method has been used extensively in industry. However, industrial finite-element (FE) models need a more efficient CMS method for satisfactory performance, since the size of FE models keeps increasing for more accurate analysis. Recently, the recursive component mode synthesis (RCMS) method was introduced to solve large-scale eigenvalue problems efficiently. This article focuses on the convergence of the RCMS method with respect to different parameters, and evaluates its accuracy and performance compared with the Lanczos method.


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. G25-G34 ◽  
Author(s):  
Saeed Vatankhah ◽  
Rosemary Anne Renaut ◽  
Vahid Ebrahimzadeh Ardestani

We develop a fast algorithm for solving the under-determined 3D linear gravity inverse problem based on randomized singular-value decomposition (RSVD). The algorithm combines an iteratively reweighted approach for [Formula: see text]-norm regularization with the RSVD methodology in which the large-scale linear system at each iteration is replaced with a much smaller linear system. Although the optimal choice for the low-rank approximation of the system matrix with [Formula: see text] rows is [Formula: see text], acceptable results are achievable with [Formula: see text]. In contrast to the use of the iterative LSQR algorithm for the solution of linear systems at each iteration, the singular values generated using RSVD yield a good approximation of the dominant singular values of the large-scale system matrix. Thus, the regularization parameter found for the small system at each iteration is dependent on the dominant singular values of the large-scale system matrix and appropriately regularizes the dominant singular space of the large-scale problem. The results achieved are comparable with those obtained using the LSQR algorithm for solving each linear system, but they are obtained at a reduced computational cost. The method has been tested on synthetic models along with real gravity data from the Morro do Engenho complex in central Brazil.
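For reference, the basic randomized SVD building block (a Gaussian range finder followed by a small dense SVD) can be sketched as follows; the names and the oversampling choice are illustrative and not the authors' implementation, which embeds the factorization in the iteratively reweighted inversion loop.

```python
import numpy as np

def randomized_svd(G, q, oversample=10, seed=0):
    """Rank-q randomized SVD of an m x n matrix G: sample its range with a
    Gaussian test matrix, then take a small dense SVD of the projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = G.shape
    Omega = rng.standard_normal((n, q + oversample))   # random test matrix
    Q, _ = np.linalg.qr(G @ Omega)                     # orthonormal basis for the sampled range of G
    B = Q.T @ G                                        # small (q + oversample) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :q], s[:q], Vt[:q, :]             # approximate rank-q factors of G
```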

