Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations

2014, Vol 2014, pp. 1-8
Author(s): Hongcai Yin, Huamin Zhang

By analyzing the eigenvalues of the related matrices, this paper gives a convergence analysis of the least squares based iteration for solving the coupled Sylvester equations AX + YB = C and DX + YE = F. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester equation AXB + CXD = F. The analysis demonstrates that if the matrix equation has a unique solution, then the least squares based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.
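As a rough illustration of the kind of update a least squares based iteration performs on these coupled equations, the following NumPy sketch applies least-squares-style corrections to X and Y computed from the two residuals. It is a plausible simplified form under the assumption of square coefficient matrices, not the authors' algorithm verbatim; the paper itself establishes that the optimal convergence factor of its iteration is 1.

```python
import numpy as np

def ls_iterative_coupled_sylvester(A, B, C, D, E, F, mu=1.0, iters=500, tol=1e-10):
    """Sketch of a least squares based iteration for AX + YB = C and DX + YE = F.
    All matrices are assumed square (n x n); mu is the convergence factor."""
    n = A.shape[0]
    X = np.zeros((n, n))
    Y = np.zeros((n, n))
    for _ in range(iters):
        R1 = C - A @ X - Y @ B          # residual of the first equation
        R2 = F - D @ X - Y @ E          # residual of the second equation
        # Least-squares style corrections: pinv(A) @ R1 solves min ||A Z - R1||_F.
        # The factor 1/2 averages the two corrections, since X and Y each
        # appear in both equations.
        X = X + (mu / 2.0) * (np.linalg.pinv(A) @ R1 + np.linalg.pinv(D) @ R2)
        Y = Y + (mu / 2.0) * (R1 @ np.linalg.pinv(B) + R2 @ np.linalg.pinv(E))
        if max(np.linalg.norm(R1), np.linalg.norm(R2)) < tol:
            break
    return X, Y
```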

2012, Vol 2012, pp. 1-28
Author(s): Feng Yin, Guang-Xin Huang

An iterative algorithm is constructed to solve the generalized coupled Sylvester matrix equations (AXB - CYD, EXF - GYH) = (M, N), which include the Sylvester and Lyapunov matrix equations as special cases, over generalized reflexive matrices X and Y. When the matrix equations are consistent, for any initial generalized reflexive matrix pair [X1, Y1], the generalized reflexive solutions can be obtained by the iterative algorithm within a finite number of iterative steps in the absence of round-off errors, and the least Frobenius norm generalized reflexive solutions can be obtained by choosing a special kind of initial matrix pair. The unique optimal approximation generalized reflexive solution pair [X̂, Ŷ] to a given matrix pair [X0, Y0] in Frobenius norm can be derived by finding the least-norm generalized reflexive solution pair [X̃*, Ỹ*] of a new corresponding generalized coupled Sylvester matrix equation pair (AX̃B - CỸD, EX̃F - GỸH) = (M̃, Ñ), where M̃ = M - AX0B + CY0D and Ñ = N - EX0F + GY0H. Several numerical examples are given to show the effectiveness of the presented iterative algorithm.
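For readers unfamiliar with the constraint: a matrix X is generalized reflexive with respect to generalized reflection matrices P and Q (P = P^T, P^2 = I, and likewise for Q) if PXQ = X. The small sketch below, illustrative rather than part of the authors' algorithm, shows the Frobenius-orthogonal projection onto this subspace, which is the structure such iterations preserve.

```python
import numpy as np

def is_generalized_reflection(P, tol=1e-12):
    """Check that P is a generalized reflection matrix: symmetric and involutory."""
    return (np.linalg.norm(P - P.T) < tol and
            np.linalg.norm(P @ P - np.eye(P.shape[0])) < tol)

def project_generalized_reflexive(X, P, Q):
    """Orthogonal (Frobenius) projection of X onto the subspace {Z : P Z Q = Z}."""
    return 0.5 * (X + P @ X @ Q)
```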


2016, Vol 40 (1), pp. 341-347
Author(s): Ahmed ME Bayoumi, Mohamed A Ramadan

In this paper, we present an accelerated gradient-based iterative algorithm for solving extended Sylvester–conjugate matrix equations. The idea builds on the gradient-based method introduced in Wu et al. (Applied Mathematics and Computation 217(1): 130–142, 2010a), the relaxed gradient-based algorithm proposed in Ramadan et al. (Asian Journal of Control 16(5): 1–8, 2014), and the modified gradient-based algorithm proposed in Bayoumi (PhD thesis, Ain Shams University, 2014). The convergence analysis of the algorithm is investigated. We show that the iterative solution converges to the exact solution for any initial value provided some appropriate assumptions are made. A numerical example is given to illustrate the effectiveness of the proposed method and to test its efficiency and accuracy compared with the existing methods presented in Wu et al. (2010a), Ramadan et al. (2014) and Bayoumi (2014).
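For a representative Sylvester–conjugate equation of the form A X B + C conj(X) D = F (the extended equations treated in the paper are more general), the basic, non-accelerated gradient step updates X along the steepest-descent direction of the squared residual. The sketch below shows only this underlying step; the acceleration and relaxation parameters of the papers above are not reproduced.

```python
import numpy as np

def gradient_sylvester_conjugate(A, B, C, D, F, mu, iters=1000, tol=1e-10):
    """Basic gradient iteration sketch for A X B + C conj(X) D = F over complex X."""
    X = np.zeros((A.shape[1], B.shape[0]), dtype=complex)
    for _ in range(iters):
        R = F - A @ X @ B - C @ np.conj(X) @ D            # current residual
        # Steepest-descent direction of ||R||_F^2 with respect to X.
        G = A.conj().T @ R @ B.conj().T + np.conj(C.conj().T @ R @ D.conj().T)
        X = X + mu * G
        if np.linalg.norm(R) < tol:
            break
    return X
```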


2014, Vol 2014, pp. 1-10
Author(s): Huamin Zhang

This paper is concerned with the iterative solution of a class of real coupled matrix equations. By using the hierarchical identification principle, a gradient-based iterative algorithm is constructed to solve the real coupled matrix equations A1XB1 + A2XB2 = F1 and C1XD1 + C2XD2 = F2. The range of the convergence factor is derived to guarantee that the iterative algorithm is convergent for any initial value. The analysis indicates that if the coupled matrix equations have a unique solution, then the iterative solution converges quickly to the exact one for any initial value under proper conditions. A numerical example is provided to illustrate the effectiveness of the proposed algorithm.
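A minimal sketch of the gradient step behind such an algorithm (not necessarily the exact hierarchical-identification form used in the paper): with residuals R1 and R2, X is moved along the negative gradient of ||R1||_F^2 + ||R2||_F^2, and the convergence factor mu must lie in the range the paper derives.

```python
import numpy as np

def gradient_coupled(A1, B1, A2, B2, F1, C1, D1, C2, D2, F2, mu,
                     iters=2000, tol=1e-10):
    """Gradient-descent sketch for A1 X B1 + A2 X B2 = F1 and C1 X D1 + C2 X D2 = F2."""
    X = np.zeros((A1.shape[1], B1.shape[0]))
    for _ in range(iters):
        R1 = F1 - A1 @ X @ B1 - A2 @ X @ B2
        R2 = F2 - C1 @ X @ D1 - C2 @ X @ D2
        # Negative gradient of 0.5*(||R1||_F^2 + ||R2||_F^2) with respect to X.
        G = (A1.T @ R1 @ B1.T + A2.T @ R1 @ B2.T +
             C1.T @ R2 @ D1.T + C2.T @ R2 @ D2.T)
        X = X + mu * G
        if max(np.linalg.norm(R1), np.linalg.norm(R2)) < tol:
            break
    return X
```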


2012, Vol 2012, pp. 1-6
Author(s): Xuefeng Duan, Chunmei Li

Based on the alternating projection algorithm, proposed by von Neumann to treat the problem of finding the projection of a given point onto the intersection of two closed subspaces, we propose a new iterative algorithm to solve the matrix nearness problem associated with the matrix equations AXB = E, CXD = F, which arises frequently in experimental design. If we choose the initial iterative matrix X0 = 0, the least Frobenius norm solution of these matrix equations is obtained. Numerical examples show that the new algorithm is feasible and effective.
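The key computation is the Frobenius-norm projection of the current iterate onto each solution set: when AXB = E is consistent, the closest matrix to X satisfying it is X + A^+ (E - A X B) B^+, where ^+ denotes the Moore–Penrose pseudoinverse. The following sketch of the resulting alternating projection loop assumes both equations are consistent; the paper's algorithm and its convergence analysis are given in full there.

```python
import numpy as np

def alternating_projection(A, B, E, C, D, F, iters=500, tol=1e-12):
    """Alternate Frobenius-norm projections onto {X: AXB = E} and {X: CXD = F}.
    Starting from X = 0 yields the least Frobenius norm common solution
    when the pair of equations is consistent."""
    Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
    Cp, Dp = np.linalg.pinv(C), np.linalg.pinv(D)
    X = np.zeros((A.shape[1], B.shape[0]))          # initial iterate X0 = 0
    for _ in range(iters):
        X = X + Ap @ (E - A @ X @ B) @ Bp           # project onto solutions of AXB = E
        X = X + Cp @ (F - C @ X @ D) @ Dp           # project onto solutions of CXD = F
        if (np.linalg.norm(A @ X @ B - E) < tol and
                np.linalg.norm(C @ X @ D - F) < tol):
            break
    return X
```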


2013, Vol 2013, pp. 1-15
Author(s): Zhongli Zhou, Guangxin Huang

The general coupled matrix equations (including the generalized coupled Sylvester matrix equations as special cases) have numerous applications in control and system theory. In this paper, an iterative algorithm is constructed to solve the general coupled matrix equations for a reflexive matrix solution. When the general coupled matrix equations are consistent over reflexive matrices, the reflexive solution can be determined automatically by the iterative algorithm within a finite number of iterative steps in the absence of round-off errors. The least Frobenius norm reflexive solution of the general coupled matrix equations can be derived when an appropriate initial matrix is chosen. Furthermore, the unique optimal approximation reflexive solution to a given matrix group in Frobenius norm can be derived by finding the least-norm reflexive solution of the corresponding general coupled matrix equations. A numerical example is given to illustrate the effectiveness of the proposed iterative algorithm.
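As a conceptual sketch only (not the paper's finite-termination algorithm for the coupled case), one way to respect the constraint is to project each update onto the subspace of reflexive matrices {X : PXP = X}, where P = P^T and P^2 = I. For a single equation AXB = M this looks as follows.

```python
import numpy as np

def reflexive_projection(X, P):
    """Orthogonal projection onto the subspace of P-reflexive matrices (PXP = X)."""
    return 0.5 * (X + P @ X @ P)

def gradient_reflexive(A, B, M, P, mu, iters=2000, tol=1e-10):
    """Gradient iteration for AXB = M restricted to reflexive matrices.
    Conceptual sketch; the paper handles general coupled equations."""
    X = np.zeros((A.shape[1], B.shape[0]))          # square, and reflexive (zero matrix)
    for _ in range(iters):
        R = M - A @ X @ B
        G = A.T @ R @ B.T                           # negative gradient of 0.5*||R||_F^2
        X = X + mu * reflexive_projection(G, P)     # stay in the reflexive subspace
        if np.linalg.norm(R) < tol:
            break
    return X
```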


Filomat, 2012, Vol 26 (3), pp. 607-613
Author(s): Xiang Wang, Dan Liao

A hierarchical gradient-based iterative algorithm was presented in [L. Xie et al., Computers and Mathematics with Applications 58 (2009) 1441-1448] for finding the numerical solution of general linear matrix equations, and its convergence factor was investigated through numerical experiments. However, the authors pointed out that how to choose the best convergence factor remained an open problem. In this paper, we analyze the gradient-based iterative algorithm and obtain its optimal convergence factor. Moreover, the theoretical results of this paper can be extended to other gradient-type methods. Results of numerical experiments are consistent with the theoretical findings.
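As a concrete instance of the kind of result involved: for the single equation AXB = F, the gradient iteration X_{k+1} = X_k + mu * A^T (F - A X_k B) B^T is a Richardson iteration on the vectorized normal equations, and its optimal factor is mu_opt = 2 / (lambda_max + lambda_min), where lambda_max and lambda_min are the largest and smallest eigenvalues of (B B^T) ⊗ (A^T A). The sketch below computes this factor numerically; the paper derives the optimal factor for the more general equations of Xie et al.

```python
import numpy as np

def optimal_convergence_factor(A, B):
    """Optimal convergence factor for the gradient iteration
    X_{k+1} = X_k + mu * A.T @ (F - A @ X_k @ B) @ B.T applied to AXB = F.
    Illustrative single-equation case only."""
    eig_AtA = np.linalg.eigvalsh(A.T @ A)
    eig_BBt = np.linalg.eigvalsh(B @ B.T)
    lam_max = eig_AtA.max() * eig_BBt.max()
    lam_min = eig_AtA.min() * eig_BBt.min()
    # Richardson-type optimum; requires lam_min > 0, i.e. a unique solution.
    return 2.0 / (lam_max + lam_min)
```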


Author(s): Fatemeh Beik, Salman Ahmadi-Asl

Recently, some research has been devoted to finding the explicit forms of the η-Hermitian and η-anti-Hermitian solutions of several kinds of quaternion matrix equations and their associated least-squares problems. Although exploiting iterative algorithms is superior to utilizing explicit forms in applications, hitherto, an iterative approach has not been offered for finding η-(anti)-Hermitian solutions of quaternion matrix equations. The current paper deals with applying an efficient iterative method for determining η-Hermitian and η-anti-Hermitian least-squares solutions of the quaternion matrix equation AXB + CYD = E. More precisely, this paper first establishes some properties of the η-Hermitian and η-anti-Hermitian matrices. These properties allow for the demonstration of how the well-known conjugate gradient least-squares (CGLS) method can be developed for solving the mentioned problem over the η-Hermitian and η-anti-Hermitian matrices. In addition, the convergence properties of the proposed algorithm are discussed in detail. When the coefficient matrices are ill-conditioned, a preconditioner is suggested to accelerate the convergence of the algorithm. Numerical experiments are reported to show the validity of the elaborated results and the feasibility of the proposed iterative algorithm and its preconditioned version.
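To indicate the structure of a CGLS-type method for AXB + CYD = E, the following sketch implements standard CGLS for the linear operator L(X, Y) = AXB + CYD over complex matrices. It is only an unconstrained analogue: the paper's algorithm works over quaternion matrices and restricts the iterates to η-(anti-)Hermitian structure, neither of which is reproduced here.

```python
import numpy as np

def cgls_axb_cyd(A, B, C, D, E, iters=500, tol=1e-12):
    """CGLS sketch for min ||A X B + C Y D - E||_F over unstructured X, Y.
    The operator is L(X, Y) = A X B + C Y D; its adjoint is
    L*(R) = (A^H R B^H, C^H R D^H)."""
    X = np.zeros((A.shape[1], B.shape[0]), dtype=complex)
    Y = np.zeros((C.shape[1], D.shape[0]), dtype=complex)
    R = E - (A @ X @ B + C @ Y @ D)                         # residual
    SX = A.conj().T @ R @ B.conj().T                        # adjoint applied to residual
    SY = C.conj().T @ R @ D.conj().T
    PX, PY = SX.copy(), SY.copy()                           # search directions
    gamma = np.linalg.norm(SX)**2 + np.linalg.norm(SY)**2
    for _ in range(iters):
        Q = A @ PX @ B + C @ PY @ D
        alpha = gamma / np.linalg.norm(Q)**2
        X, Y = X + alpha * PX, Y + alpha * PY
        R = R - alpha * Q
        SX = A.conj().T @ R @ B.conj().T
        SY = C.conj().T @ R @ D.conj().T
        gamma_new = np.linalg.norm(SX)**2 + np.linalg.norm(SY)**2
        if np.sqrt(gamma_new) < tol:                        # gradient small: done
            break
        PX = SX + (gamma_new / gamma) * PX
        PY = SY + (gamma_new / gamma) * PY
        gamma = gamma_new
    return X, Y
```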


2020, Vol 2020, pp. 1-5
Author(s): Ehab A. El-Sayed, Eid E. El Behady

This paper considers a new method to solve the first-order and second-order nonhomogeneous generalized Sylvester matrix equations AV + BW = EVF + R and MVF^2 + DVF + KV = BW + R, respectively, where A, E, M, D, K, B, and F are arbitrary known real matrices and V and W are the matrices to be determined. An explicit solution for these equations is proposed, based on the orthogonal reduction of the matrix F to an upper Hessenberg form H. The technique is very simple and does not require the eigenvalues of the matrix F to be known. The proposed method is illustrated by numerical examples.
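The central preprocessing step, orthogonal reduction of F to upper Hessenberg form, is readily available in standard libraries. The sketch below uses scipy.linalg.hessenberg to compute F = Q H Q^T and forms the transformed right-hand side, after which the first equation becomes A(VQ) + B(WQ) = E(VQ)H + RQ and can be attacked column by column because H is upper Hessenberg. The paper's full column-wise solution procedure is not reproduced here.

```python
import numpy as np
from scipy.linalg import hessenberg

def hessenberg_transform(F, R):
    """Reduce F to upper Hessenberg form, F = Q @ H @ Q.T, and transform R.
    Returns H, Q, and R @ Q for the transformed equation
    A (V Q) + B (W Q) = E (V Q) H + R Q."""
    H, Q = hessenberg(F, calc_q=True)      # SciPy guarantees F == Q @ H @ Q.conj().T
    R_tilde = R @ Q                        # transformed right-hand side
    return H, Q, R_tilde
```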

