ZNN models for computing matrix inverse based on hyperpower iterative methods

Filomat ◽  
2017 ◽  
Vol 31 (10) ◽  
pp. 2999-3014 ◽  
Author(s):  
Igor Stojanovic ◽  
Predrag Stanimirovic ◽  
Ivan Zivkovic ◽  
Dimitrios Gerontitis ◽  
Xue-Zhong Wang

Our goal is to investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang neural network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Simulink implementation in MATLAB of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. Convergence properties of the proposed ZNN models are investigated, as well as their numerical behavior.
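As an illustration of the hyperpower family the abstract builds on, here is a minimal NumPy sketch (not the authors' Simulink/ZNN implementation) of the order-2 (Newton-Schulz) and order-3 iterations; the initialization X0 = A^T / (||A||_1 ||A||_inf) is a standard choice that guarantees convergence for nonsingular A.

```python
import numpy as np

def hyperpower_inverse(A, order=2, tol=1e-12, max_iter=200):
    """Approximate A^{-1} with a hyperpower iteration of a given order.

    Order 2 is the classical Newton-Schulz step X <- X(2I - AX);
    order 3 uses X <- X(3I - 3AX + (AX)^2).  The initial guess
    X0 = A^T / (||A||_1 ||A||_inf) ensures the spectral radius of
    I - A X0 is below 1 for nonsingular A.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        AX = A @ X
        if order == 2:
            X = X @ (2 * I - AX)
        else:  # order 3
            X = X @ (3 * I - 3 * AX + AX @ AX)
        if np.linalg.norm(I - A @ X, np.inf) < tol:
            break
    return X
```

The order-3 step costs one extra matrix multiplication per iteration but converges cubically rather than quadratically; these are exactly the discrete steps that the ZNN discretization in the abstract mirrors.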

Mathematics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 2
Author(s):  
Santiago Artidiello ◽  
Alicia Cordero ◽  
Juan R. Torregrosa ◽  
María P. Vassileva

A secant-type method is designed for approximating the inverse and some generalized inverses of a complex matrix A. For a nonsingular matrix, the proposed method gives an approximation of the inverse; when the matrix is singular, approximations of the Moore–Penrose inverse and the Drazin inverse are obtained. The convergence and the order of convergence are presented in each case. Some numerical tests allowed us to confirm the theoretical results and to compare the performance of our method with other known ones. With these results, iterative methods with memory appear for the first time for estimating the solution of nonlinear matrix equations.
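The authors' secant-type scheme is not reproduced here; as a point of comparison, the classical Ben-Israel–Cohen iteration below (a hyperpower-type method) also converges to the Moore–Penrose inverse for rectangular or rank-deficient matrices, given the standard initialization X0 = a·A^H with 0 < a < 2/sigma_max^2.

```python
import numpy as np

def pinv_newton_schulz(A, tol=1e-10, max_iter=500):
    """Ben-Israel/Cohen iteration X <- X(2I - AX): with X0 = a*A^H
    and 0 < a < 2/sigma_max^2 it converges to the Moore-Penrose
    inverse A^+, even for rectangular or rank-deficient A."""
    X = A.conj().T / np.linalg.norm(A, 2) ** 2   # a = 1/sigma_max^2
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        X_new = X @ (2 * I - A @ X)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X
```

Note that for singular or rectangular A the product A X converges to the orthogonal projector onto the range of A rather than to the identity, which is why the stopping test is on the increment, not the residual.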


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 144
Author(s):  
Sergio Amat ◽  
Sonia Busquier ◽  
Miguel Ángel Hernández-Verón ◽  
Ángel Alberto Magreñán

This paper is devoted to the approximation of matrix pth roots. We present and analyze a family of algorithms free of inverses. The method is a combination of two families of iterative methods: the first one gives an approximation of the matrix inverse, and the second family uses the first method to compute an approximation of the matrix pth root. We analyze the computational cost and the convergence of this family of methods. Finally, we introduce several numerical examples in order to check the performance of this combination of schemes. We conclude that the inverse-free method emerges as a good alternative, since it achieves similar numerical behavior at a smaller computational cost.
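The paper's inverse-free family for general pth roots is not given in the abstract; for the p = 2 case, the coupled Newton-Schulz iteration below is a classical inverse-free scheme for the matrix square root. This sketch assumes a symmetric positive definite input, for which scaling by the Frobenius norm provably places the spectrum in (0, 1] so that the convergence condition ||I - A|| < 1 holds.

```python
import numpy as np

def sqrtm_newton_schulz(A, iters=50):
    """Coupled Newton-Schulz iteration (inverse-free):
       Y <- 0.5*Y(3I - ZY),  Z <- 0.5*(3I - ZY)Z,
    with Y_k -> A^{1/2} and Z_k -> A^{-1/2} provided ||I - A|| < 1.
    A is first scaled so the condition holds (valid for SPD input,
    since ||A||_F bounds the largest eigenvalue from above)."""
    n = A.shape[0]
    I = np.eye(n)
    s = np.linalg.norm(A, 'fro')        # scale: eigenvalues of A/s lie in (0, 1]
    Y, Z = A / s, I.copy()
    for _ in range(iters):
        T = 0.5 * (3 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(s)               # undo the scaling: (A/s)^{1/2} * s^{1/2}
```

The iteration uses only matrix multiplications, which is exactly the property the paper exploits: no linear solves or explicit inverses are needed along the way.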


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4002 ◽  
Author(s):  
Vahid Tavakkoli ◽  
Jean Chamberlain Chedjou ◽  
Kyandoghere Kyamakya

The concept presented in this paper builds on previous dynamical methods for realizing time-varying matrix inversion. It is essentially a set of coupled ordinary differential equations (ODEs) which constitutes a recurrent neural network (RNN) model. The coupled ODEs form a universal modeling framework for matrix inversion: the proposed model converges to the exact inverse if the matrix is invertible, and otherwise to an approximate inverse. Although various methods exist to solve matrix inversion across science and engineering, most of them either assume that the time-varying matrix inversion is free of noise or require a denoising module before the inversion computation starts. In practice, however, the presence of noise is a serious problem; moreover, the denoising process is computationally expensive and can violate the real-time requirements of the system. Hence, a new matrix-inversion method that inherently integrates noise cancellation is highly desirable. In this paper, such a combined/extended method for time-varying matrix inversion is proposed and investigated. The proposed method extends both the gradient neural network (GNN) and the Zhang neural network (ZNN) concepts. The new model is proven to be exponentially stable in the sense of Lyapunov theory. Furthermore, compared to related previous methods (namely GNN, ZNN, the Chen neural network, and the integration-enhanced Zhang neural network, IEZNN), it has a much better theoretical convergence speed. Finally, all of these models (the new one and the previous ones) are compared on practical examples, and their respective convergence and error rates are measured. The proposed method is shown to have a better practical convergence rate than the other models, and it yields a very good approximation of the matrix inverse even in the presence of noise.
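To make the GNN half of the GNN/ZNN combination concrete, here is a minimal Euler-discretized sketch for a constant matrix (the time-varying, noise-integrating model of the paper is not reproduced); the continuous dynamics are a gradient flow on the energy 0.5*||A X - I||_F^2.

```python
import numpy as np

def gnn_inverse(A, gamma=1.0, h=None, steps=5000):
    """Euler-discretized gradient neural network (GNN) for A^{-1}:
       dX/dt = -gamma * A^T (A X - I),
    i.e. gradient flow on the energy 0.5*||A X - I||_F^2.
    The default step h satisfies h*gamma < 2/sigma_max^2, the
    stability bound for the explicit Euler scheme."""
    n = A.shape[0]
    I = np.eye(n)
    if h is None:
        h = 1.0 / (gamma * np.linalg.norm(A, 2) ** 2)
    X = np.zeros((n, n))
    for _ in range(steps):
        X = X - h * gamma * A.T @ (A @ X - I)
    return X
```

The design parameter gamma plays the same role as in the paper's models: larger gamma speeds up the continuous dynamics, at the price of a smaller admissible discretization step.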


2020 ◽  
Vol 96 (3s) ◽  
pp. 543-548
Author(s):  
Н.Н. Балан ◽  
А.А. Березин ◽  
Е.С. Горнев ◽  
В.В. Иванов ◽  
Е.В. Ипатова ◽  
...  

This paper is dedicated to the application of neural-network algorithms to lithographic calculations. It reviews the main family of problems in computational lithography for which neural networks are an appropriate solution approach, and discusses the advantages and disadvantages of the neural-network solutions recommended for these tasks.


Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2710
Author(s):  
Shivam Barwey ◽  
Venkat Raman

High-fidelity simulations of turbulent flames are computationally expensive when using detailed chemical kinetics. For practical fuels and flow configurations, chemical kinetics can account for the vast majority of the computational time due to the highly non-linear nature of multi-step chemistry mechanisms and the inherent stiffness of combustion chemistry. While reducing this cost has been a key focus area in combustion modeling, the recent growth in graphics processing units (GPUs) that offer very fast arithmetic processing, combined with the development of highly optimized libraries for the artificial neural networks used in machine learning, provides a unique pathway for acceleration. The goal of this paper is to recast Arrhenius kinetics as a neural network using matrix-based formulations. Unlike data-driven ANNs, this formulation requires no training and represents the chemistry mechanism exactly. More specifically, connections between the exact matrix equations for kinetics and traditional artificial neural network layers are used to enable the use of GPU-optimized linear algebra libraries without the need for modeling. Regarding GPU performance, speedup and saturation behaviors are assessed for several chemical mechanisms of varying complexity. The performance analysis is based on trends for absolute compute times and throughput for the various arithmetic operations encountered during the source term computation. The goals are ultimately to provide insights into how the source term calculations scale with reaction mechanism complexity, which types of reactions benefit most from the GPU formulations, and how to exploit the matrix-based formulations, using sparsity properties, to provide optimal speedup for large mechanisms. Overall, the GPU performance for the species source term evaluations reveals many informative trends with regard to the effect of cell number on device saturation and speedup. Most importantly, it is shown that the matrix-based method enables highly efficient GPU performance across the board, achieving near-peak performance in saturated regimes.
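The core idea of recasting kinetics as matrix algebra can be illustrated on the Arrhenius part alone: since ln k = ln A + b ln T - Ea/(RT), the rate constants of all reactions follow from a single matrix-vector product, exactly like a linear layer acting on the features [1, ln T, 1/T]. The three-reaction mechanism below is hypothetical toy data, not from the paper.

```python
import numpy as np

R = 8.314462618  # J/(mol K), universal gas constant

# Hypothetical 3-reaction mechanism: pre-exponentials A, temperature
# exponents b, and activation energies Ea [J/mol] (illustrative values)
ln_A = np.log([1.0e10, 5.0e12, 2.0e8])
b    = np.array([0.0, -0.5, 1.2])
Ea   = np.array([80e3, 120e3, 40e3])

# One weight matrix covers all reactions: ln k = W @ [1, ln T, 1/T]
W = np.column_stack([ln_A, b, -Ea / R])   # shape (n_reactions, 3)

def arrhenius_rates(T):
    """Modified Arrhenius k = A * T^b * exp(-Ea/(R*T)) for all
    reactions at once, evaluated as a single matrix-vector product
    (a 'linear layer' on the features [1, ln T, 1/T])."""
    features = np.array([1.0, np.log(T), 1.0 / T])
    return np.exp(W @ features)
```

Because the weights are fixed by the mechanism, no training is involved, and on a GPU the same evaluation for many cells becomes one dense matrix-matrix multiply, which is the operation GPU linear-algebra libraries are optimized for.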


Algorithms ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 101
Author(s):  
Alicia Cordero ◽  
Marlon Moscoso-Martínez ◽  
Juan R. Torregrosa

In this paper, we present a new parametric family of three-step iterative methods for solving nonlinear equations. First, we design a fourth-order triparametric family which, by keeping only one of its parameters free, accelerates its convergence and finally yields a sixth-order uniparametric family. For this last family, we study its convergence, its complex dynamics (stability), and its numerical behavior. The parameter spaces and dynamical planes are presented, showing the complexity of the family. From the parameter spaces, we have been able to determine members of the family with bad convergence properties, since attracting periodic orbits and attracting strange fixed points appear in their dynamical planes. Moreover, the same study has allowed us to detect family members with especially stable behavior that are suitable for solving practical problems. Several numerical tests are performed to illustrate the efficiency and stability of the presented family.
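The sixth-order uniparametric family itself is not given in the abstract; as an illustration of three-step composition, here is a classical Ostrowski-type scheme (reported as sixth order in the literature) that reuses one derivative evaluation across three steps. It is illustrative only, not the family studied in the paper.

```python
def ostrowski3(f, df, x0, tol=1e-12, max_iter=50):
    """Three-step Ostrowski-type iteration: one derivative and three
    function evaluations per step.  Illustrative sketch only -- not
    the uniparametric family of the paper."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        y = x - fx / dfx                      # Newton predictor
        fy = f(y)
        w = fx / (fx - 2 * fy)                # Ostrowski weight
        z = y - w * fy / dfx                  # fourth-order corrector
        x = z - w * f(z) / dfx                # third step
    return x
```

Reusing f'(x) in all three steps keeps the cost per iteration low, which is the usual efficiency argument for multi-step families of this kind.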

