The extended nonsymmetric block Lanczos methods for solving large-scale differential Lyapunov equations

2021 ◽ Vol 8 (3) ◽ pp. 526-536
Author(s): L. Sadek, H. Talibi Alaoui

In this paper, we present a new approach for solving large-scale differential Lyapunov equations. The proposed approach projects the initial problem onto an extended block Krylov subspace using the extended nonsymmetric block Lanczos algorithm, yielding a low-dimensional differential Lyapunov matrix equation. This small matrix equation is then solved by the backward differentiation formula (BDF) method or the Rosenbrock (ROS) method, and the resulting solution is used to build a low-rank approximate solution of the original problem. We also give some theoretical results. Numerical experiments demonstrate the performance of our approach.

Acta Numerica ◽ 2003 ◽ Vol 12 ◽ pp. 267-319
Author(s): Roland W. Freund

In recent years, reduced-order modelling techniques based on Krylov-subspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools for tackling the large-scale time-invariant linear dynamical systems that arise in the simulation of electronic circuits. This paper reviews the main ideas of reduced-order modelling techniques based on Krylov subspaces and describes some applications of reduced-order modelling in circuit simulation.
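A toy instance of the Krylov-subspace idea behind such reduced-order models: a one-sided Arnoldi projection of dimension m matches the first m Markov parameters c^T A^j b of a state-space model (expansion about infinity; practical circuit solvers expand about a finite frequency via a shifted inverse). All names below are illustrative, not from the paper:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: orthonormal basis V of K_m(A, b) and H = V^T A V."""
    n = len(b)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]                        # fill the last column of H
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    return V, H
```

The reduced model (H, V^T b, V^T c) reproduces the moments c^T A^j b for j = 0, ..., m-1, which is the mechanism the review builds on.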


2020 ◽ Vol 60 (4) ◽ pp. 1221-1259
Author(s): Patrick Kürschner, Melina A. Freitag

CALCOLO ◽ 2019 ◽ Vol 56 (4)
Author(s): Maximilian Behr, Peter Benner, Jan Heiland

The differential Sylvester equation and its symmetric version, the differential Lyapunov equation, appear in different fields of applied mathematics like control theory, system theory, and model order reduction. The few available straightforward numerical approaches, when applied to large-scale systems, come with prohibitively large storage requirements. This shortage motivates us to summarize and explore existing solution formulas for these equations. We develop a unifying approach based on the spectral theorem for normal operators like the Sylvester operator $\mathcal{S}(X)=AX+XB$ and derive a formula for its norm using an induced operator norm based on the spectra of A and B. In view of numerical approximations, we propose an algorithm that identifies a suitable Krylov subspace using Taylor series and uses a projection to approximate the solution. Numerical results for large-scale differential Lyapunov equations are presented in the last sections.
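The spectral viewpoint can be sketched on the stationary Sylvester equation $AX+XB=C$: for normal $A$ and $B$, the operator $\mathcal{S}$ has eigenvalues $\alpha_i+\beta_j$, and the solution diagonalizes in the eigenbases. A minimal sketch, assuming symmetric matrices as a convenient normal case (not the paper's Krylov algorithm):

```python
import numpy as np

def sylvester_spectral(A, B, C):
    """Solve A X + X B = C for normal A, B via spectral decompositions.
    In the eigenbases the transformed solution has entries
    (U^{-1} C V)_{ij} / (alpha_i + beta_j), mirroring the fact that the
    Sylvester operator S(X) = A X + X B has eigenvalues alpha_i + beta_j."""
    alpha, U = np.linalg.eig(A)
    beta, V = np.linalg.eig(B)
    Ct = np.linalg.inv(U) @ C @ V
    Xt = Ct / (alpha[:, None] + beta[None, :])   # divide by eigenvalue sums
    return (U @ Xt @ np.linalg.inv(V)).real
```

The same eigenvalue sums govern the operator norm discussed in the abstract: the equation is solvable exactly when no sum $\alpha_i+\beta_j$ vanishes.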


2016 ◽ Vol 40 (3) ◽ pp. 995-1004
Author(s): Caiqin Song, Guoliang Chen

The solution of the nonhomogeneous Yakubovich matrix equation [Formula: see text] is important in stability analysis and controller design for linear systems. This equation, which contains the well-known Kalman–Yakubovich matrix equation and the general discrete Lyapunov matrix equation as special cases, is investigated in this paper. Closed-form solutions are presented using the Smith normal form reduction, and an equivalent form of the equation is provided. Compared with existing methods, the method presented here places no restriction on the dimensions of the unknown matrix: it is suitable for both low-dimensional and high-dimensional unknown matrices. As an application, parametric pole assignment for descriptor linear systems by PD feedback is considered.
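One of the special cases named above, the general discrete Lyapunov equation, has a simple equivalent linear-system form obtained by vectorization. A minimal sketch of that equivalence (not the paper's Smith-normal-form construction), assuming the standard form X - A X Aᵀ = Q:

```python
import numpy as np

def discrete_lyapunov(A, Q):
    """Solve the discrete Lyapunov equation  X - A X A^T = Q  by vectorization:
    vec(A X A^T) = (A kron A) vec(X), so  (I - A kron A) vec(X) = vec(Q).
    Column-stacking vec corresponds to numpy's order='F' reshape."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A, A)
    x = np.linalg.solve(M, Q.reshape(-1, order="F"))
    return x.reshape(n, n, order="F")
```

The dense n² x n² system makes the dimensional limitation of naive approaches concrete: it is usable only for small n, which is exactly the gap closed-form solutions aim to avoid.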


2022 ◽ pp. 17-25
Author(s): Nancy Jan Sliper

Experimenters today frequently quantify millions or even billions of characteristics (measurements) per sample to address critical biological questions, in the hope that machine learning tools will be able to make correct data-driven judgments. An efficient analysis requires a low-dimensional representation that preserves the differentiating features in data whose size and complexity are orders of magnitude apart (e.g., whether a certain ailment is present in a person's body). While there are several methods that can handle millions of variables and still offer strong empirical and conceptual guarantees, few of them are clearly interpretable. This research presents an evaluation of supervised dimensionality reduction for large-scale data. We provide a methodology for extending Principal Component Analysis (PCA) by including category moment estimates in low-dimensional projections. Linear Optimum Low-Rank (LOLR) projection, the cheapest variant, includes the class-conditional means. We show that LOLR projections and their extensions enhance representations of data for subsequent classification while retaining computational flexibility and reliability, using both experimental and simulated benchmark data. In terms of accuracy, LOLR prediction outperforms other modular linear dimension-reduction methods that require much longer computation times on conventional computers. LOLR handles more than 150 million attributes in brain-image-processing datasets, and many genome-sequencing datasets have more than half a million attributes.
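The core construction, augmenting PCA directions with class-conditional means, can be sketched in a few lines. This is a hedged two-class illustration of the idea, not the authors' implementation; the function name and parameters are assumptions:

```python
import numpy as np

def lolr_like_projection(X, y, d):
    """LOLR-style supervised projection: prepend the difference of
    class-conditional means to the top PCA directions, then orthonormalize.
    Assumes two classes labelled 0/1; returns a p-by-d orthonormal projector."""
    delta = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    Xc = X - X.mean(axis=0)
    # top principal directions from the SVD of the centred data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = np.hstack([delta[:, None], Vt[: d - 1].T])   # means first, then PCA
    Q, _ = np.linalg.qr(W)                           # orthonormalize the projector
    return Q[:, :d]
```

Because only means and leading singular vectors are needed, each ingredient can be computed in a single streaming pass, which is what keeps the approach viable at millions of attributes.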


2017 ◽ Vol 2017 ◽ pp. 1-11
Author(s): Mohammad-Sahadet Hossain, M. Monir Uddin

We present efficient techniques for the solution of large-scale sparse projected periodic discrete-time Lyapunov equations in lifted form. Such problems arise in model reduction and state-feedback problems for periodic descriptor systems. The two most popular techniques for solving such Lyapunov equations iteratively are the low-rank alternating direction implicit (LR-ADI) method and the low-rank Smith method. The main contribution of this paper is to update the LR-ADI method by exploiting adaptive computation of the shift parameters and efficient handling of complex shift parameters. These approaches substantially reduce the computational cost in both time and memory. We also apply these iterative Lyapunov solvers to balanced-truncation model reduction of periodic discrete-time descriptor systems. Numerical results illustrate the performance and accuracy of the proposed methods.
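For readers unfamiliar with LR-ADI, the basic iteration is easiest to see on the standard continuous-time Lyapunov equation with real shifts; the sketch below is that simplified case, not the paper's projected periodic discrete-time variant, and the shift choice is illustrative rather than the adaptive strategy the paper develops:

```python
import numpy as np

def lr_adi(A, B, shifts):
    """Low-rank ADI for  A X + X A^T + B B^T = 0  (A stable, real shifts p_i < 0).
    Returns Z with X ~= Z Z^T; each sweep appends one low-rank block, so the
    cost per step is a single sparse/shifted solve with A + p_i I."""
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    Z = V
    for p_next in shifts[1:]:
        # standard real-shift LR-ADI update of the iterate block
        V = np.sqrt(p_next / p) * (V - (p_next + p) * np.linalg.solve(A + p_next * I, V))
        Z = np.hstack([Z, V])
        p = p_next
    return Z
```

Convergence hinges on the shift parameters, which is why adaptive shift computation (and careful handling of complex shifts, avoided here by staying real) is the paper's main contribution.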


Author(s): Xianglan Bai, Alessandro Buccini, Lothar Reichel

Randomized methods can be competitive for the solution of problems with a large matrix of low rank. They have also been applied successfully to the solution of large-scale linear discrete ill-posed problems by Tikhonov regularization (Xiang and Zou, Inverse Probl. 29:085008, 2013). This entails computing an approximation of a partial singular value decomposition of a large matrix A that is of numerical low rank. The present paper compares a randomized method with a Krylov subspace method based on Golub–Kahan bidiagonalization with respect to accuracy and computing time, and discusses characteristics of linear discrete ill-posed problems that make them well suited for solution by a randomized method.
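A basic randomized partial SVD of the Halko-Martinsson-Tropp type, the kind of method compared in such studies, fits in a few lines; this is a generic sketch with assumed parameter names, not the specific variant analyzed in the paper:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2):
    """Randomized partial SVD: sample the range of A with a Gaussian test
    matrix, optionally sharpen with a few power iterations, then take an
    exact SVD of the small projected matrix Q^T A."""
    m, n = A.shape
    G = np.random.randn(n, k + oversample)
    Y = A @ G
    for _ in range(power_iters):        # helps when singular values decay slowly
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]
```

Its appeal for numerically low-rank, severely ill-posed problems is visible in the structure: only a handful of matrix-block products touch A, versus one matrix-vector product per bidiagonalization step in a Golub-Kahan-based Krylov method.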

