A toy model to investigate stability of AI-based dynamical systems

Author(s):  
Blanka Balogh ◽  
David Saint-Martin ◽  
Aurélien Ribes

The development of atmospheric parameterizations based on neural networks is often hampered by numerical instability issues. Previous attempts to replicate these issues in a toy model have proven ineffective. We introduce a new toy model for atmospheric dynamics, which consists of an extension of the Lorenz'63 model to a higher dimension. While neural networks trained on a single orbit can easily reproduce the dynamics of the Lorenz'63 model, they fail to reproduce the dynamics of the new toy model, leading to unstable trajectories. Instabilities become more frequent as the dimension of the new model increases, but are found to occur even in very low dimension. Training the neural network on a different learning sample, based on Latin Hypercube Sampling, solves the instability issue. Our results suggest that the design of the learning sample can significantly influence the stability of dynamical systems driven by neural networks.
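To make the contrast between the two learning-sample designs concrete, the following is a minimal sketch (not the authors' code): it generates a single-orbit sample by integrating the classical Lorenz'63 equations with a simple Euler step, and a space-filling sample via `scipy.stats.qmc.LatinHypercube`. The state bounds, sample sizes, and initial condition are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classical Lorenz'63 tendencies (dx/dt, dy/dt, dz/dt).
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def single_orbit_sample(n_steps=1000, dt=0.01):
    # Learning sample drawn from one trajectory: states cluster on the attractor.
    states = np.empty((n_steps, 3))
    s = np.array([1.0, 1.0, 1.0])
    for i in range(n_steps):
        states[i] = s
        s = s + dt * lorenz63(s)  # simple forward-Euler step
    return states

def lhs_sample(n_points=1000, bounds=((-20, 20), (-25, 25), (0, 50)), seed=0):
    # Space-filling learning sample: covers a box enclosing the attractor,
    # including off-attractor states a single orbit never visits.
    sampler = qmc.LatinHypercube(d=3, seed=seed)
    unit = sampler.random(n_points)  # points in the unit cube
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + unit * (hi - lo)
```

A network trained on `single_orbit_sample` only sees near-attractor states, whereas `lhs_sample` also provides targets for states slightly off the attractor, which is the kind of coverage the abstract credits with stabilizing the learned dynamics.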

Author(s):  
Daniela Danciu

Neural networks, both natural and artificial, are characterized by two kinds of dynamics. The first is what we would call "learning dynamics". The second is the intrinsic dynamics of the neural network viewed as a dynamical system after the weights have been established via learning. The chapter deals with the second kind. More precisely, since the emergent computational capabilities of a recurrent neural network can be achieved only if it has suitable dynamical properties when viewed as a system with several equilibria, the chapter deals with the qualitative properties connected to achieving such dynamical properties as global asymptotics and gradient-like behavior. In the case of neural networks with delays, these aspects are reformulated in accordance with the state of the art of the theory of time-delay dynamical systems.


2015 ◽  
Vol 13 ◽  
pp. 168-171 ◽  
Author(s):  
Dumitru Bălă

In this paper we present several methods for the study of stability of dynamical systems. We analyze the stability of a hammer, modeled as a free vibrator that collides with a sprung elastic mass, also taking viscous damping into consideration.


1987 ◽  
Vol 109 (4) ◽  
pp. 410-413 ◽  
Author(s):  
Norio Miyagi ◽  
Hayao Miyagi

This note applies the direct method of Lyapunov to the stability analysis of a dynamical system with multiple nonlinearities. The essential feature of the Lyapunov function used in this note is that it is a non-Lur'e-type Lyapunov function, which surpasses the Lur'e-type Lyapunov function in terms of the guaranteed stability region. A modified version of the multivariable Popov criterion is used to construct the non-Lur'e-type Lyapunov function, which allows for dynamical systems with multiple nonlinearities.
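For context, the classical Lur'e-type (Lur'e–Postnikov) Lyapunov function referred to above is, for a system with nonlinear feedbacks $\varphi_i$, conventionally written as a quadratic form plus integrals of the nonlinearities (a standard textbook form, not the note's specific construction):

$$
\dot{x} = Ax + \sum_{i} b_i\,\varphi_i(\sigma_i), \qquad \sigma_i = c_i^{\top} x,
$$

$$
V(x) = x^{\top} P x + \sum_{i} \beta_i \int_{0}^{\sigma_i} \varphi_i(s)\, ds, \qquad P = P^{\top} \succ 0, \quad \beta_i \ge 0.
$$

A "non-Lur'e-type" function, in the sense used here, departs from this fixed quadratic-plus-integral structure, which is what permits the larger guaranteed stability region.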


Author(s):  
Yifei Zhao ◽  
Yueqiang Liu ◽  
Shuo Wang ◽  
G Z Hao ◽  
Zheng-Xiong Wang ◽  
...  

Artificial neural networks (NNs) are trained, based on a numerical database, to predict the no-wall and ideal-wall βN limits, due to onset of the n = 1 (n is the toroidal mode number) ideal external kink instability, for the HL-2M tokamak. The database is constructed by toroidal computations utilizing both the equilibrium code CHEASE and the stability code MARS-F. The stability results show that (i) the plasma elongation generally enhances both βN limits, for either positive or negative triangularity plasmas; (ii) the effect is more pronounced for positive triangularity plasmas; (iii) the computed no-wall βN limit scales linearly with the plasma internal inductance, with the proportionality coefficient ranging between 1 and 5 for HL-2M; (iv) the no-wall limit substantially decreases with increasing pressure peaking factor. Furthermore, both a fully connected neural network (NN) model and a convolutional neural network (CNN) model are trained and tested, yielding consistent results. The trained NNs predict both the no-wall and ideal-wall limits with up to 95% accuracy, compared to those directly computed by the stability code. Additional test cases, produced by the Tokamak Simulation Code (TSC), also show reasonable performance of the trained NNs, with the relative error remaining within 10%. The constructed database provides effective references for future HL-2M operations. The trained NNs can be used as a real-time monitor for disruption prevention in HL-2M experiments, or serve as part of the integrated modeling tools for ideal kink stability analysis.


2015 ◽  
Vol 17 (8) ◽  
pp. 083025 ◽  
Author(s):  
Paul Kirk ◽  
Delphine M Y Rolando ◽  
Adam L MacLean ◽  
Michael P H Stumpf

Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 335 ◽  
Author(s):  
Gani Stamov ◽  
Ivanka Stamova ◽  
Stanislav Simeonov ◽  
Ivan Torlakov

The present paper is devoted to Bidirectional Associative Memory (BAM) Cohen–Grossberg-type impulsive neural networks with time-varying delays. Instead of impulsive discontinuities at fixed moments of time, we consider variable impulsive perturbations. The notion of stability with respect to manifolds is introduced for the neural network model under consideration. By means of the Lyapunov function method, sufficient conditions that guarantee the stability properties of solutions are established. Two examples are presented to show the validity of the proposed stability criteria.


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Simon Schug ◽  
Frederik Benzing ◽  
Angelika Steger

When an action potential arrives at a synapse, there is a high probability that no neurotransmitter is released. Surprisingly, simple computational models suggest that these synaptic failures enable information processing at lower metabolic costs. However, these models only consider information transmission at single synapses, ignoring the remainder of the neural network as well as its overall computational goal. Here, we investigate how synaptic failures affect the energy efficiency of models of entire neural networks that solve a goal-driven task. We find that presynaptic stochasticity and plasticity improve energy efficiency, and show that the network allocates most energy to a sparse subset of important synapses. We demonstrate that stabilising these synapses helps to alleviate the stability-plasticity dilemma, thus connecting a presynaptic notion of importance to a computational role in lifelong learning. Overall, our findings present a set of hypotheses for how presynaptic plasticity and stochasticity contribute to sparsity, energy efficiency and improved trade-offs in the stability-plasticity dilemma.

