The generalized bisymmetric solutions of the matrix equation $A_1X_1B_1 + A_2X_2B_2 + \cdots + A_lX_lB_l = C$ and its optimal approximation

2008 · Vol 50 (2) · pp. 127-144
Author(s): Zhuo-hua Peng, Jin-wang Liu
2021 · Vol 7 (3) · pp. 3680-3691
Author(s): Huiting Zhang, Yuying Yuan, Sisi Li, Yongxin Yuan, ...

In this paper, the least-squares solutions to the linear matrix equation $A^{\ast}XB+B^{\ast}X^{\ast}A = D$ are discussed. By using the canonical correlation decomposition (CCD) of a pair of matrices, the general representation of the least-squares solutions to the matrix equation is derived. Moreover, the expression of the solution to the corresponding weighted optimal approximation problem is obtained.
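To make the least-squares problem concrete, the following sketch solves the real-matrix case $A^{\top}XB + B^{\top}X^{\top}A = D$ numerically by vectorizing the equation with Kronecker products and a commutation matrix and calling `numpy.linalg.lstsq`, which returns the minimum-Frobenius-norm least-squares solution. This is only an illustration under those assumptions, not the canonical correlation decomposition approach of the paper, and the function names are ad hoc.

```python
import numpy as np

def commutation_matrix(m, n):
    """K such that K @ vec(X) = vec(X.T) for X of shape (m, n),
    with vec stacking columns (Fortran order)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

def lstsq_atxb_btxta(A, B, D):
    """Minimum-norm least-squares X for A.T @ X @ B + B.T @ X.T @ A = D
    (real case), via vec(A.T X B) = kron(B.T, A.T) vec(X)."""
    m, _ = A.shape
    n, _ = B.shape
    K = commutation_matrix(m, n)
    M = np.kron(B.T, A.T) + np.kron(A.T, B.T) @ K
    x, *_ = np.linalg.lstsq(M, D.flatten(order="F"), rcond=None)
    return x.reshape((m, n), order="F")

# quick consistency check on random data
rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
X_true = rng.standard_normal((5, 4))
D = A.T @ X_true @ B + B.T @ X_true.T @ A
X = lstsq_atxb_btxta(A, B, D)
print(np.linalg.norm(A.T @ X @ B + B.T @ X.T @ A - D))  # close to zero
```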


2007 · Vol 75 (2) · pp. 289-298
Author(s): Konghua Guo, Xiyan Hu, Lei Zhang

An iteration method for the matrix equation $AXB = C$ is constructed. With this iteration method, the least-norm solution is obtained when the matrix equation is consistent, and the least-norm least-squares solution is obtained when it is inconsistent. The related optimal approximation solution is also obtained by this iteration method. A preconditioning technique for improving the convergence rate is put forward. Finally, some numerical examples are given.
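One way to realize such an iteration is plain gradient descent on $\tfrac12\|AXB - C\|_F^2$ started at the zero matrix, which converges to the least-norm solution in the consistent case and to the least-norm least-squares solution otherwise. The sketch below is an illustrative assumption in that spirit, not necessarily the construction or the preconditioning used in the paper; the function name and step-size choice are ours.

```python
import numpy as np

def gradient_solve_axb(A, B, C, tol=1e-10, max_iter=20000):
    """Gradient iteration for min ||A X B - C||_F, started at X = 0 so the
    limit is the minimum-norm (least-squares) solution."""
    X = np.zeros((A.shape[1], B.shape[0]))
    # any step below 2 / (||A||_2^2 * ||B||_2^2) guarantees convergence
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    for _ in range(max_iter):
        R = C - A @ X @ B            # residual
        G = A.T @ R @ B.T            # negative gradient of 0.5 * ||A X B - C||_F^2
        if np.linalg.norm(G) < tol:
            break
        X = X + mu * G
    return X

rng = np.random.default_rng(1)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((3, 5))
C = A @ rng.standard_normal((4, 3)) @ B   # consistent right-hand side
X = gradient_solve_axb(A, B, C)
print(np.linalg.norm(A @ X @ B - C))      # small residual
```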


2021 · Vol 2021 (1)
Author(s): Adisorn Kittisopaporn, Pattrawut Chansangiam, Wicharn Lewkeeratiyutkul

We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB+CXD = E$, where $A, B, C, D, E$ are conforming rectangular matrices. Our algorithm is based on gradients and the hierarchical identification principle. We convert the matrix iteration process to a first-order linear difference vector equation with a matrix coefficient. The Banach contraction principle reveals that the sequence of approximated solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor so that the spectral radius of the iteration matrix is minimized. In particular, we obtain iterative algorithms for the matrix equation $AXB=C$, the Sylvester equation, and the Kalman–Yakubovich equation. We give numerical experiments of the proposed algorithm to illustrate its applicability, effectiveness, and efficiency.
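As a rough illustration of a gradient-based iteration for $AXB + CXD = E$, the sketch below performs plain gradient descent on $\tfrac12\|AXB + CXD - E\|_F^2$ with a conservative fixed convergence factor; it is an assumption-laden stand-in, not the hierarchical-identification algorithm or the optimal convergence factor derived in the paper.

```python
import numpy as np

def gradient_solve_axb_cxd(A, B, C, D, E, tol=1e-10, max_iter=50000):
    """Gradient descent on 0.5 * ||A X B + C X D - E||_F^2 with a fixed,
    conservative step size (convergence factor)."""
    X = np.zeros((A.shape[1], B.shape[0]))
    # the map X -> A X B + C X D has norm at most ||A||*||B|| + ||C||*||D||,
    # so a step of 1 / bound**2 stays inside the convergence interval
    bound = (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
             + np.linalg.norm(C, 2) * np.linalg.norm(D, 2))
    mu = 1.0 / bound ** 2
    for _ in range(max_iter):
        R = E - A @ X @ B - C @ X @ D          # residual
        G = A.T @ R @ B.T + C.T @ R @ D.T      # negative gradient
        if np.linalg.norm(G) < tol:
            break
        X = X + mu * G
    return X

rng = np.random.default_rng(2)
A, C = rng.standard_normal((5, 3)), rng.standard_normal((5, 3))
B, D = rng.standard_normal((4, 6)), rng.standard_normal((4, 6))
X_true = rng.standard_normal((3, 4))
E = A @ X_true @ B + C @ X_true @ D
X = gradient_solve_axb_cxd(A, B, C, D, E)
print(np.linalg.norm(A @ X @ B + C @ X @ D - E))   # small residual
```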


1972 · Vol 15 (9) · pp. 820-826
Author(s): R. H. Bartels, G. W. Stewart
