Mean-square exponential input-to-state stability of stochastic inertial neural networks

2021, Vol 2021 (1)
Author(s): Wentao Wang, Wei Chen

Abstract: By introducing parameters perturbed by white noise, we propose a class of stochastic inertial neural networks in random environments. By constructing two Lyapunov–Krasovskii functionals, we establish the mean-square exponential input-to-state stability of the addressed model, which generalizes and refines recent results. In addition, an example with a numerical simulation is given to support the theoretical findings.
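As a companion to the abstract above, the following is a minimal Euler–Maruyama sketch of a single-neuron stochastic inertial network. The second-order model form, all parameter values (`a`, `b`, `c`, `sigma`), the `tanh` activation, and the sinusoidal input are illustrative assumptions, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 5.0
n = round(T / dt)
a, b, c, sigma = 2.0, 1.0, 0.5, 0.1   # damping, stiffness, weight, noise level

x, v = 1.0, 0.0                       # state and its derivative
traj = np.empty(n)
for k in range(n):
    u = 0.1 * np.sin(2 * np.pi * k * dt)   # bounded external input
    dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment
    # inertial dynamics x'' = -a x' - b x + c f(x) + u, with white-noise
    # perturbation of the stiffness term (the sigma * x * dW contribution)
    v += (-a * v - b * x + c * np.tanh(x) + u) * dt + sigma * x * dW
    x += v * dt
    traj[k] = x

print(traj[-1])
```

With these (assumed) stabilizing parameters the trajectory stays bounded under the bounded input, which is the qualitative behavior an exponential input-to-state stability result guarantees.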

2021, Vol 11 (1)
Author(s): Jingwei Liu, Peixuan Li, Xuehan Tang, Jiaxin Li, Jiaming Chen

Abstract: Artificial neural networks (ANN), which include deep neural networks (DNN), suffer from problems such as the local-minimum problem of the back-propagation neural network (BPNN), the instability problem of the radial basis function neural network (RBFNN), and the limited maximum precision of the convolutional neural network (CNN). The performance (training speed, precision, etc.) of BPNN, RBFNN, and CNN is therefore expected to be improved. The main contributions are as follows. Firstly, building on the existing BPNN and RBFNN, a wavelet neural network (WNN) is implemented as a basis for further improving CNN. The WNN adopts the network structure of BPNN to obtain faster training, and uses a wavelet function as its activation function, whose form is similar to the radial basis function of RBFNN, to avoid the local-minimum problem. Secondly, a WNN-based convolutional wavelet neural network (CWNN) is proposed, in which the fully connected layers (FCL) of CNN are replaced by a WNN. Thirdly, comparative simulations of BPNN, RBFNN, CNN, and CWNN on the MNIST and CIFAR-10 datasets are carried out and analyzed. Fourthly, a wavelet-based convolutional neural network (WCNN) is proposed, in which the wavelet transformation serves as the activation function in the convolutional–pooling part (CPNN) of CNN. Fifthly, simulations of WCNN are carried out and analyzed on the MNIST dataset. The results are as follows. Firstly, WNN resolves the problems of BPNN and RBFNN and delivers better performance. Secondly, the proposed CWNN reduces the mean square error and the error rate of CNN, i.e., CWNN achieves better maximum precision than CNN. Thirdly, the proposed WCNN reduces the mean square error and the error rate of CWNN, i.e., WCNN achieves better maximum precision than CWNN.
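The "wavelet as activation function" idea at the heart of WNN can be sketched as below. The choice of the Mexican-hat (Ricker) wavelet, the layer sizes, and the random weights are illustrative assumptions; the paper does not specify these details in the abstract.

```python
import numpy as np

def mexican_hat(t):
    """Ricker wavelet: (1 - t^2) * exp(-t^2 / 2), a localized bump
    similar in shape to an RBF but with sign changes."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3)) * 0.5   # input -> hidden weights (BPNN-style layer)
W2 = rng.normal(size=(1, 4)) * 0.5   # hidden -> output weights

def wnn_forward(x):
    """One hidden layer with a wavelet activation in place of a sigmoid."""
    h = mexican_hat(W1 @ x)          # wavelet plays the RBF-like role
    return W2 @ h

y = wnn_forward(np.array([0.2, -0.1, 0.4]))
print(y.shape)
```

In a CWNN, a small network of this kind would replace the fully connected layers of a CNN; in a WCNN, the wavelet function itself would replace the activation inside the convolutional part.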


Mathematics, 2020, Vol 8 (5), pp. 815
Author(s): Usa Humphries, Grienggrai Rajchakit, Pramet Kaewmesri, Pharunyou Chanthorn, Ramalingam Sriraman, ...

In this paper, we study the mean-square exponential input-to-state stability (exp-ISS) problem for a new class of neural network (NN) models, namely continuous-time stochastic memristive quaternion-valued neural networks (SMQVNNs) with time delays. Firstly, to overcome the difficulties posed by non-commutative quaternion multiplication, we decompose the original SMQVNNs into four real-valued models. Secondly, by constructing a suitable Lyapunov functional and applying Itô’s formula, Dynkin’s formula, and inequality techniques, we prove that the considered system is mean-square exp-ISS. In comparison with conventional stability research, we derive a new mean-square exp-ISS criterion for SMQVNNs. The results obtained in this paper generalize previously known results in the complex and real fields. Finally, a numerical example is provided to show the effectiveness of the theoretical results.
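The decomposition step rests on expanding the non-commutative Hamilton product of quaternions into four real-valued equations, one per component. A minimal sketch (the 4-tuple representation is an assumption of this illustration, not the paper's notation):

```python
# A quaternion q = r + x*i + y*j + z*k is stored as a tuple (r, x, y, z).
def qmul(p, q):
    """Hamilton product, expanded into four real-valued equations."""
    pr, px, py, pz = p
    qr, qx, qy, qz = q
    return (pr*qr - px*qx - py*qy - pz*qz,   # real part
            pr*qx + px*qr + py*qz - pz*qy,   # i part
            pr*qy - px*qz + py*qr + pz*qx,   # j part
            pr*qz + px*qy - py*qx + pz*qr)   # k part

# Non-commutativity, the difficulty the decomposition avoids: i*j = k, j*i = -k
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1)
print(qmul(j, i))  # (0, 0, 0, -1)
```

Writing each quaternion state of the SMQVNN in this component form is what turns the single quaternion-valued system into four coupled real-valued ones.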


2016, Vol 2016, pp. 1-19
Author(s): Chuangxia Huang, Jie Cao, Peng Wang

We address the stochastic attractor and boundedness problem for a class of switched Cohen–Grossberg neural networks (CGNN) with discrete and infinitely distributed delays. With the help of stochastic analysis techniques, the Lyapunov–Krasovskii functional method, the linear matrix inequality (LMI) technique, and the average dwell time (ADT) approach, some novel sufficient conditions are established for mean-square uniform ultimate boundedness, the existence of a stochastic attractor, and the mean-square exponential stability of the switched Cohen–Grossberg neural networks. Finally, illustrative examples and their simulations demonstrate the effectiveness of the proposed results.
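The ADT condition used for such switched systems is easy to state concretely: a switching signal has average dwell time τ_a if the number of switches N(T) on [0, T] satisfies N(T) ≤ N0 + T/τ_a for some chatter bound N0 ≥ 0. A small sketch with hypothetical switching instants:

```python
def has_adt(switch_times, T, tau_a, N0=1.0):
    """True if the switch count on [0, T] respects the ADT bound
    N(T) <= N0 + T / tau_a."""
    N = sum(1 for t in switch_times if 0 <= t <= T)
    return N <= N0 + T / tau_a

times = [0.8, 2.1, 3.9, 5.5, 7.2]        # hypothetical switching instants
print(has_adt(times, T=8.0, tau_a=1.5))  # 5 <= 1 + 8/1.5 ~ 6.33 -> True
print(has_adt(times, T=8.0, tau_a=4.0))  # 5 <= 1 + 2 = 3     -> False
```

Stability results of the kind summarized above typically hold only for switching signals whose τ_a exceeds a threshold derived from the Lyapunov–Krasovskii analysis.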


2013, Vol 760-762, pp. 1742-1747
Author(s): Jin Fang Han

This paper is concerned with the mean-square exponential stability analysis problem for a class of stochastic interval cellular neural networks with time-varying delay. By using the stochastic analysis approach and employing a Lyapunov function and norm inequalities, several mean-square exponential stability criteria are established in terms of Itô’s formula and the Razumikhin theorem, guaranteeing that the stochastic interval delayed cellular neural networks are mean-square exponentially stable. Some recent results reported in the literature are generalized. An equivalent description of these stochastic interval cellular neural networks with time-varying delay is also given.


2012, Vol 2012, pp. 1-20
Author(s): Ting Wang, Tao Li, Mingxiang Xue, Shumin Fei

Together with the Lyapunov–Krasovskii functional approach and an improved delay-partitioning idea, a novel sufficient condition is derived to guarantee that a class of delayed neural networks is asymptotically stable in the mean-square sense, in which the delay is a probabilistic variable and both of its variation limits are available. By combining the reciprocal convex technique and the convex combination technique, the criterion is presented via LMIs, and its solvability depends heavily on the sizes of both the time-delay range and its variation; it can thus become much less conservative than existing conditions by thinning the delay intervals. Finally, four numerical examples demonstrate that our idea reduces conservatism more effectively than some earlier reported ones.


2012, Vol 16 (2), pp. 357-363
Author(s): Peng Guo, Changpin Li, Fanhai Zeng

In this paper, we study the fractional Langevin equation whose derivative is taken in the Caputo sense. Using the derived numerical algorithm, we obtain the displacement and the mean-square displacement, which describe the dynamic behavior of the fractional Langevin equation.
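A numerical treatment of this kind can be sketched as follows for an overdamped fractional Langevin equation D^α x(t) = -λ x(t) + ξ(t) with 0 < α < 1 (Caputo, zero initial data), discretized with Grünwald–Letnikov weights. The specific equation, α, λ, and the noise scaling are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, lam, dt, n, paths = 0.7, 1.0, 1e-2, 200, 500

# Grünwald–Letnikov weights w_k = (-1)^k * binom(alpha, k), via the
# standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)
w = np.empty(n + 1)
w[0] = 1.0
for k in range(1, n + 1):
    w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)

x = np.zeros((paths, n + 1))
h_a = dt**alpha
for m in range(1, n + 1):
    # memory term: sum_{k=1}^{m} w_k * x_{m-k} over each path's history
    hist = x[:, m - 1::-1] @ w[1:m + 1]
    noise = rng.normal(0.0, np.sqrt(dt), paths)       # Brownian increments
    # GL step for D^alpha x = -lam*x + xi, explicit in the drift
    x[:, m] = -hist + h_a * (-lam * x[:, m - 1] + noise / dt)

msd = (x**2).mean(axis=0)   # mean-square displacement across sample paths
print(msd[-1])
```

Averaging x² over many sample paths is how the mean-square displacement is estimated; the fractional memory enters through the full history sum in `hist`.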

