Decomposition approach to the stability of recurrent neural networks with asynchronous time delays in quaternion field

2017 ◽  
Vol 94 ◽  
pp. 55-66 ◽  
Author(s):  
Dandan Zhang ◽  
Kit Ian Kou ◽  
Yang Liu ◽  
Jinde Cao


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Lei Ding ◽  
Hong-Bing Zeng ◽  
Wei Wang ◽  
Fei Yu

This paper investigates the stability of static recurrent neural networks (SRNNs) with a time-varying delay. Based on the complete delay-decomposing approach and the quadratic separation framework, a novel Lyapunov-Krasovskii functional is constructed. By employing a reciprocally convex technique to account for the relationship between the time-varying delay and its variation interval, some improved delay-dependent stability conditions are presented in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to demonstrate the merits and effectiveness of the proposed method.
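The paper's full delay-decomposed LMIs are lengthy, but the underlying recipe (choose a Lyapunov-Krasovskii functional, reduce its decrement along trajectories to an LMI, and test feasibility with a semidefinite solver) can be sketched numerically. The Python/CVXPY snippet below is a minimal sketch for a plain linear delayed system x'(t) = A x(t) + B x(t - tau); the matrices and the simple delay-independent condition are illustrative stand-ins, not the paper's reciprocally convex, delay-dependent conditions.

```python
import numpy as np
import cvxpy as cp

# Illustrative delayed linear system x'(t) = A x(t) + B x(t - tau).
# A and B are made-up values, not taken from the paper.
A = np.array([[-2.0, 0.0], [0.0, -2.0]])
B = np.array([[0.5, 0.0], [0.5, 0.5]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# For V = x'Px + integral of x'Qx over [t - tau, t], dV/dt < 0 along
# trajectories iff the block matrix below is negative definite.
M = cp.bmat([[A.T @ P + P @ A + Q, P @ B],
             [B.T @ P, -Q]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   Q >> eps * np.eye(n),
                   M << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("delay-independent LMI feasible:", prob.status == cp.OPTIMAL)
```

A feasible pair (P, Q) certifies global asymptotic stability for any constant delay; the delay-decomposing and reciprocally convex machinery in the paper refines this basic idea to obtain less conservative, delay-dependent bounds.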


2021 ◽  
pp. 1-43
Author(s):  
Alfred Rajakumar ◽  
John Rinzel ◽  
Zhe S. Chen

Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints such as Dale's principle help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence repeated periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, which contains growing or damping modes, together with the RNN's nonlinearity, was sufficient to generate a limit-cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
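The training pipeline is not reproduced here, but the two ingredients named in the abstract (a sign-constrained recurrent weight matrix obeying Dale's principle, and the eigenspectrum diagnostic for growing versus damping modes) are easy to sketch. The Python snippet below is a minimal illustration with made-up network sizes and random weights, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20          # illustrative E/I split, not the paper's sizes
n = n_exc + n_inh

# Dale's principle: all outgoing weights of a presynaptic unit share one sign,
# so each column of W is nonnegative (excitatory) or nonpositive (inhibitory).
magnitude = np.abs(rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)))
sign = np.diag([1.0] * n_exc + [-1.0] * n_inh)
W = magnitude @ sign

def step(x, u, dt=0.1, tau=1.0):
    """One Euler step of the rate dynamics tau * dx/dt = -x + W r(x) + u."""
    r = np.maximum(x, 0.0)     # ReLU nonlinearity
    return x + (dt / tau) * (-x + W @ r + u)

# Eigenspectrum diagnostic: linearized modes with Re(lambda) > 1 grow, and
# together with the rectifying nonlinearity they can support a limit cycle
# rather than a stable fixed point.
eigvals = np.linalg.eigvals(W)
print("max real part of recurrent spectrum:", eigvals.real.max())
```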


1999 ◽  
Vol 09 (02) ◽  
pp. 95-98 ◽  
Author(s):  
ANKE MEYER-BÄSE

This paper is concerned with the asymptotic hyperstability of recurrent neural networks. Based on these stability results, we derive necessary and sufficient conditions on the network parameters. The results are more general than those obtained with Lyapunov methods, since they impose milder constraints on the connection weights than conventional results and do not assume symmetry of the weights.
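Hyperstability analysis in the Popov tradition rests on the linear part of the feedback loop being strictly positive real (SPR), which is what frees the conditions from weight symmetry. As a minimal sketch of that style of frequency-domain check, with made-up state-space data rather than the paper's network, one can sample Re G(jw) along the imaginary axis:

```python
import numpy as np

# Illustrative SISO linear block G(s) = c (sI - A)^{-1} b + d;
# these values are assumptions for the sketch, not from the paper.
A = np.array([[-3.0, 1.0], [0.0, -2.0]])
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = 0.5

omegas = np.logspace(-3, 3, 2000)
re_g = [np.real(c @ np.linalg.inv(1j * w * np.eye(2) - A) @ b + d).item()
        for w in omegas]
# Strict positive realness requires Re G(jw) > 0 at every frequency.
print("min Re G(jw) over grid:", min(re_g))
```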


2010 ◽  
Vol 88 (12) ◽  
pp. 885-898 ◽  
Author(s):  
R. Raja ◽  
R. Sakthivel ◽  
S. Marshal Anthoni

This paper investigates the stability of a class of discrete-time stochastic neural networks with mixed time delays and impulsive effects. By constructing a new Lyapunov–Krasovskii functional and combining it with the linear matrix inequality (LMI) approach, a novel set of sufficient conditions is derived to ensure the global asymptotic stability of the equilibrium point of the addressed discrete-time neural networks. The result is then extended to the robust stability of uncertain discrete-time stochastic neural networks with impulsive effects. One important feature of this paper is that stability of the equilibrium point is proved under mild conditions on the activation functions, which are not required to be differentiable or strictly monotonic. In addition, two numerical examples are provided to show that the proposed method is effective and less conservative than existing results.
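The same Lyapunov–Krasovskii-plus-LMI recipe carries over to discrete time. The CVXPY sketch below tests a simple delay-independent condition for x(k+1) = A x(k) + B x(k - d); the matrices are illustrative, and the condition deliberately omits the paper's mixed delays, stochastic perturbations, and impulses.

```python
import numpy as np
import cvxpy as cp

# Illustrative delayed discrete-time system x(k+1) = A x(k) + B x(k - d).
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.1, 0.0], [0.05, 0.1]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# With V(k) = x(k)'P x(k) + sum_{i=k-d}^{k-1} x(i)'Q x(i), the forward
# difference V(k+1) - V(k) is negative iff this block LMI holds.
M = cp.bmat([[A.T @ P @ A + Q - P, A.T @ P @ B],
             [B.T @ P @ A, B.T @ P @ B - Q]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   Q >> eps * np.eye(n),
                   M << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("discrete delay-independent LMI feasible:", prob.status == cp.OPTIMAL)
```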

