Improved Results on Finite-Time Stability Analysis of Neural Networks With Time-Varying Delays

Author(s): S. Saravanan, M. Syed Ali

This paper investigates finite-time stability analysis of time-delayed neural networks by introducing a new Lyapunov functional that exploits the delay information sufficiently, together with an augmented Lyapunov functional containing triple-integral terms. Improved delay-dependent stability criteria are derived using Jensen's inequality and the reciprocally convex combination method. The finite-time stability conditions are then formulated and solved as linear matrix inequalities (LMIs). Numerical examples are presented to verify the effectiveness of the obtained results.
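
As a minimal illustration of the Lyapunov/LMI machinery underlying such criteria, the sketch below treats only the delay-free case x' = Ax: feasibility of A^T P + P A = -Q with P = P^T > 0 certifies asymptotic stability. The 2x2 restriction and the example matrix are illustrative assumptions, not from the paper; full delay-dependent criteria add integral terms and are solved with an SDP solver instead.

```python
def solve3(M, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lyapunov_certificate(A, Q=((1.0, 0.0), (0.0, 1.0))):
    """Solve A^T P + P A = -Q for symmetric 2x2 P = [[p11, p12], [p12, p22]].

    The (1,1), (1,2) and (2,2) entries of the matrix equation give three
    linear equations in the unknowns p11, p12, p22.
    """
    (a11, a12), (a21, a22) = A
    (q11, q12), (_, q22) = Q
    M = [[2 * a11, 2 * a21, 0.0],
         [a12, a11 + a22, a21],
         [0.0, 2 * a12, 2 * a22]]
    p11, p12, p22 = solve3(M, [-q11, -q12, -q22])
    return [[p11, p12], [p12, p22]]

def is_pos_def_2x2(P):
    """Sylvester's criterion: both leading principal minors positive."""
    return P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] * P[1][0] > 0

A = ((-2.0, 1.0), (0.0, -3.0))     # a Hurwitz example matrix (illustrative)
P = lyapunov_certificate(A)
print(is_pos_def_2x2(P))           # True: P > 0 certifies stability of x' = A x
```

The same logic scales up in the LMI framework: one searches for P (and, in the delayed case, the extra functional matrices) by semidefinite programming rather than by direct elimination.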

2021, Vol. 19, No. 3, pp. 199
Author(s): Sreten Stojanović, Miloš Stevanović, Dragan Antić, Milan Stojanović

In this paper, we study stability, finite-time stability, and passivity for discrete-time neural networks (DNNs) with variable delays. For the stability analysis, an augmented Lyapunov-Krasovskii functional (LKF) with single and double summation terms and several augmented vectors is constructed by decomposing the time-delay interval into two non-equidistant subintervals. Then, using the Wirtinger-based inequality together with the reciprocally and extended reciprocally convex combination lemmas, tight estimates of the summation terms in the forward difference of the LKF are obtained. To relax the existing results, several zero equalities are introduced, and stability criteria are proposed in terms of linear matrix inequalities (LMIs). The main challenge in the finite-time stability and passivity analysis is how to evaluate the finite-time passivity conditions for DNNs effectively. To this end, some weighted summation inequalities are proposed for a finite-sum term appearing in the forward difference of the LKF, which helps to ensure that the considered delayed DNN is passive. The derived passivity criteria are also presented in terms of LMIs. Numerical examples illustrate the proposed methodology.
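
For reference, the baseline that such summation bounds refine is the classical Jensen summation inequality, stated here in a standard generic form (notation is not taken from this paper):

```latex
% Jensen summation inequality: for R = R^{\top} \succ 0 and a sequence y_i,
\sum_{i=a}^{b} y_i^{\top} R\, y_i \;\ge\; \frac{1}{b-a+1}
\left(\sum_{i=a}^{b} y_i\right)^{\!\top} R \left(\sum_{i=a}^{b} y_i\right).
```

Wirtinger-based summation inequalities add a correction term built from a weighted sum of the y_i, tightening this lower bound; that tighter estimate is what yields the less conservative LMI criteria.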


2017, Vol. 238, pp. 67-75
Author(s): Mingwen Zheng, Lixiang Li, Haipeng Peng, Jinghua Xiao, Yixian Yang, et al.

2015, Vol. 152, pp. 19-26
Author(s): Xujun Yang, Qiankun Song, Yurong Liu, Zhenjiang Zhao

2012, Vol. 2012, pp. 1-15
Author(s): Weixiong Jin, Xiaoyang Liu, Xiangjun Zhao, Nan Jiang, Zhengxin Wang

This paper is concerned with finite-time stabilization for a class of stochastic neural networks (SNNs) with noise perturbations. The aim is to design a nonlinear stabilizer that steers the states of the neural networks to zero in finite time. In contrast with previous references, a continuous stabilizer is designed to achieve this objective. Based on a recent finite-time stability theorem for stochastic nonlinear systems, sufficient conditions are established that ensure finite-time stability of the SNN dynamics in probability. The gain parameters of the finite-time controller can then be obtained by solving a linear matrix inequality, and robust finite-time stabilization is also guaranteed for SNNs with uncertain parameters. Finally, two numerical examples illustrate the effectiveness of the proposed design method.
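
For context, one standard form of a finite-time stability theorem for stochastic nonlinear systems reads as follows (stated generically; the exact assumptions used in the paper may differ):

```latex
% If V is positive definite and the infinitesimal generator \mathcal{L}
% along the stochastic system satisfies
\mathcal{L}V(x) \;\le\; -c\,\bigl(V(x)\bigr)^{\alpha},
\qquad c > 0,\;\; 0 < \alpha < 1,
% then the origin is finite-time stable in probability, and the settling
% time T admits the expectation bound
\mathbb{E}\bigl[T(x_0)\bigr] \;\le\; \frac{V(x_0)^{\,1-\alpha}}{c\,(1-\alpha)}.
```

The fractional power α < 1 is what makes the decay fast enough to reach the origin in finite time, as opposed to the exponential convergence implied by a linear bound on the generator.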


2013, Vol. 2013, pp. 1-12
Author(s): Hongjun Yu, Xiaozhan Yang, Chunfeng Wu, Qingshuang Zeng

This paper is concerned with global stability analysis for a class of continuous neural networks with time-varying delay. The lower and upper bounds of the delay and the upper bound of its first derivative are assumed to be known. By introducing a novel Lyapunov-Krasovskii functional (LKF), delay-dependent stability criteria are derived in terms of linear matrix inequalities, which guarantee that the considered neural networks are globally stable. When estimating the derivative of the LKF, instead of applying Jensen's inequality directly, an intermediate step is taken and a slack variable is introduced via the reciprocally convex combination approach; as a result, the criteria are shown to be less conservative than those in the available literature. Numerical examples demonstrate the effectiveness and merits of the proposed method.
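
The two bounding steps mentioned above can be summarized in their standard generic forms (notation is illustrative, not taken from this paper):

```latex
% Jensen's inequality for the integral term (R = R^{\top} \succ 0):
-\int_{t-h}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
\;\le\; -\frac{1}{h}
\left(\int_{t-h}^{t} \dot{x}(s)\, \mathrm{d}s\right)^{\!\top}
R \left(\int_{t-h}^{t} \dot{x}(s)\, \mathrm{d}s\right).

% Reciprocally convex combination lemma: for \alpha \in (0,1) and any
% slack matrix S with
% \begin{bmatrix} R & S \\ S^{\top} & R \end{bmatrix} \succeq 0,
\frac{1}{\alpha}\, u^{\top} R\, u + \frac{1}{1-\alpha}\, v^{\top} R\, v
\;\ge\;
\begin{bmatrix} u \\ v \end{bmatrix}^{\!\top}
\begin{bmatrix} R & S \\ S^{\top} & R \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}.
```

The matrix S here is the slack variable the abstract refers to: choosing it by LMI rather than bounding the two reciprocally weighted terms separately is what reduces conservatism.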


Author(s): Le Anh Tuan

This paper addresses the problem of finite-time boundedness for discrete-time neural networks with interval-like time-varying delays. First, a delay-dependent finite-time boundedness criterion under the finite-time H∞ performance index is given for the system, based on constructing a set of adjusted Lyapunov-Krasovskii functionals and using the reciprocally convex approach. Next, a sufficient condition is derived that directly ensures finite-time stability of the corresponding nominal system. Finally, numerical examples illustrate the validity and applicability of the presented conditions.

Keywords: discrete-time neural networks, H∞ performance, finite-time stability, time-varying delay, linear matrix inequality.
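
To make concrete what finite-time boundedness asserts, the sketch below empirically checks one scalar trajectory of a delayed discrete-time network against a state bound over a finite horizon. All parameters and the delay pattern are made-up illustrations; such a simulation complements, but does not replace, a proof via LMI criteria.

```python
import math

def finite_time_bounded(a, w, d_max, x0_hist, c2, N):
    """Simulate x(k+1) = a*x(k) + w*tanh(x(k - d(k))) with a time-varying
    delay d(k) in [0, d_max], and report whether |x(k)|^2 <= c2 holds for
    all k <= N -- the defining property of finite-time boundedness,
    checked here for a single trajectory only.
    """
    hist = list(x0_hist)                 # initial history x(-d_max), ..., x(0)
    for k in range(N):
        d = k % (d_max + 1)              # an illustrative delay pattern
        x_now, x_delayed = hist[-1], hist[-1 - d]
        hist.append(a * x_now + w * math.tanh(x_delayed))
    return all(x * x <= c2 for x in hist)

# Contractive example: |x| can never exceed 0.5*0.4 + 0.3*tanh(0.4) < 0.4.
print(finite_time_bounded(a=0.5, w=0.3, d_max=2,
                          x0_hist=[0.4, 0.4, 0.4], c2=1.0, N=50))  # True
```

The LMI criteria in the paper certify this property for every admissible delay function and every initial condition in the prescribed set, which no finite set of simulations can do.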

