A self-adaptive regularized alternating least squares method for tensor decomposition problems

2019, Vol. 18(01), pp. 129-147
Author(s): Xianpeng Mao, Gonglin Yuan, Yuning Yang

Although the alternating least squares (ALS) algorithm is a classic, easily implemented method that has been widely applied to tensor decomposition and approximation problems, it has some drawbacks: its convergence is not guaranteed, and in some cases a swamp phenomenon appears, slowing the convergence rate dramatically. To overcome these shortcomings, the regularized ALS algorithm (RALS) was proposed in the literature. In this paper, by employing an optimal step-size selection rule, we develop a self-adaptive regularized alternating least squares method (SA-RALS) to accelerate RALS. Theoretically, we show that the step-size is always larger than unity and can be larger than [Formula: see text], which is quite different from several optimization algorithms. Furthermore, under mild assumptions, we prove that the whole sequence generated by SA-RALS converges to a stationary point of the objective function. Numerical results verify that SA-RALS outperforms RALS in terms of the number of iterations and CPU time.
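For concreteness, the sketch below implements a regularized ALS sweep for rank-R CP decomposition of a third-order tensor with an extrapolated step along each subproblem's update direction. The step size here is a plain exact line search on the least-squares term, clamped below by unity to mirror the step-size property stated in the abstract; the paper's actual selection rule, regularization schedule, and convergence safeguards are not reproduced, so this is an illustrative sketch rather than the authors' method.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product: (m x R) and (n x R) -> (m*n x R)."""
    m, R = U.shape
    n = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

def unfold(T, mode):
    """Mode-n unfolding whose column ordering matches khatri_rao below."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sa_rals(T, R, mu=1e-3, iters=500, tol=1e-8, seed=0):
    """Regularized ALS for rank-R CP decomposition of a 3rd-order tensor,
    with an extrapolated (>= 1) step size per factor update.  The step is
    an exact line search on the least-squares term, clamped below by 1;
    this clamp is an illustrative stand-in for the paper's rule."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, R)) for d in T.shape]
    X = [unfold(T, n) for n in range(3)]
    normT = np.linalg.norm(T)
    for _ in range(iters):
        for n in range(3):
            U, V = (factors[m] for m in range(3) if m != n)
            M = khatri_rao(U, V)            # (product of other dims) x R
            A_prev = factors[n]
            # RALS subproblem: min_A ||X_n - A M^T||^2 + mu ||A - A_prev||^2
            G = M.T @ M + mu * np.eye(R)
            A_ls = np.linalg.solve(G, (X[n] @ M + mu * A_prev).T).T
            # Exact line search along D = A_ls - A_prev on the LS term:
            # minimize ||Res - w * D M^T||^2 over the scalar w.
            D = A_ls - A_prev
            Res = X[n] - A_prev @ M.T
            DM = D @ M.T
            denom = np.sum(DM * DM)
            w = np.sum(Res * DM) / denom if denom > 0 else 1.0
            factors[n] = A_prev + max(1.0, w) * D   # step never below unity
        rel = np.linalg.norm(
            X[0] - factors[0] @ khatri_rao(factors[1], factors[2]).T) / normT
        if rel < tol:
            break
    return factors

# Usage: recover a random rank-3 tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((d, 3)) for d in (20, 25, 30))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
F = sa_rals(T, R=3)
approx = np.einsum('ir,jr,kr->ijk', *F)
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
```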

Axioms, 2021, Vol. 10(4), pp. 278
Author(s): Ming-Feng Yeh, Ming-Hung Chang

In the original GM(1,1), the only parameters generally estimated by the ordinary least squares method are the development coefficient a and the grey input b; the weight of the background value, denoted as λ, cannot be obtained simultaneously by such a method. This study therefore proposes two simple transformation formulations such that the unknown parameters a, b, and λ can be estimated simultaneously by the least squares method. Such a grey model is termed the GM(1,1;λ). On the other hand, because the permission zone of the development coefficient is bounded, the parameter estimation of the GM(1,1) can be regarded as a bound-constrained least squares problem. Since constrained linear least squares problems can generally be solved by an iterative approach, this study applies the Matlab function lsqlin to solve such problems. Numerical results show that the proposed GM(1,1;λ) outperforms the GM(1,1) in terms of model-fitting accuracy and forecasting precision.
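To make the estimation concrete, the sketch below fits a classical GM(1,1) by bound-constrained least squares, using scipy.optimize.lsq_linear as a Python stand-in for Matlab's lsqlin. The background weight λ is fixed at 0.5 (the classical choice) and the development coefficient a is boxed to an assumed illustrative permission zone of (-2, 2); the paper's two transformation formulations, which estimate a, b, and λ simultaneously, are not reproduced here.

```python
import numpy as np
from scipy.optimize import lsq_linear

def gm11_fit(x0, lam=0.5, a_bound=2.0):
    """Fit GM(1,1) to a positive series x0 by bound-constrained least
    squares (lsq_linear plays the role of Matlab's lsqlin).  lam is the
    background-value weight (classically 0.5); a_bound is an assumed
    permission bound on the development coefficient a."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating series
    # Background values z(k) = lam*x1(k) + (1 - lam)*x1(k-1), k = 2..n.
    z = lam * x1[1:] + (1.0 - lam) * x1[:-1]
    # Grey differential equation x0(k) + a*z(k) = b  =>  x0(k) = -a*z(k) + b.
    B = np.column_stack([-z, np.ones_like(z)])
    y = x0[1:]
    # Bound-constrained least squares: a in (-a_bound, a_bound), b free.
    res = lsq_linear(B, y, bounds=([-a_bound, -np.inf], [a_bound, np.inf]))
    a, b = res.x
    return a, b

def gm11_predict(x0, a, b, horizon=0):
    """Time-response sequence: x1_hat(k+1) = (x0(1) - b/a)*exp(-a*k) + b/a,
    restored to the original domain by first-order differencing."""
    n = len(x0) + horizon
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty(n)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)
    return x0_hat

# Usage on a toy increasing series (hypothetical data, not the paper's).
data = [2.87, 3.28, 3.34, 3.77, 3.99, 4.30]
a, b = gm11_fit(data)
print("a =", a, "b =", b)
print("fitted/forecast:", gm11_predict(data, a, b, horizon=2))
```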

