Sliding mode backpropagation: control theory applied to neural network learning

Author(s):  
G.G. Parma ◽  
B.R. Menezes ◽  
A.P. Braga

Robotica ◽
1997 ◽  
Vol 15 (1) ◽  
pp. 23-30 ◽  
Author(s):  
Karel Jezernik ◽  
Miran Rodič ◽  
Riko Šafarič ◽
Boris Curk

This paper develops a method for neural network control design with sliding modes, in which robustness is inherent. Neural network control is formulated as a class of variable structure system (VSS) control. Sliding modes are used to determine the best values for the parameters in the neural network learning rules, thereby improving the robustness of learning control. A switching manifold is prescribed, and the phase trajectory is required to satisfy both the reaching condition and the sliding condition for sliding modes.
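The switching idea above can be sketched in a minimal, hypothetical example (it is not the paper's controller): a first-order plant with a one-parameter "neural" controller, where the tracking error serves as the switching variable and the weight is updated with a pure sign (VSS-style) term rather than a gradient magnitude. All plant parameters and gains below are assumptions for illustration.

```python
import numpy as np

# Hypothetical 1-D tracking sketch: plant x' = -a*x + b*u with a
# single-parameter controller u = w*e. The switching variable is the
# tracking error s = e; the VSS-style rule updates w using only sign(s),
# driving the trajectory toward the manifold s = 0.
a, b = 1.0, 2.0        # assumed plant parameters
dt, eta = 0.01, 1.0    # integration step and switching gain (assumptions)
ref = 1.0              # constant reference to track
x, w = 0.0, 0.0

for _ in range(3000):
    e = ref - x                 # switching variable s = e
    u = w * e                   # one-weight controller
    x += dt * (-a * x + b * u)  # plant step (explicit Euler)
    w += eta * dt * np.sign(e)  # sliding-mode update: pure switching term

print(abs(ref - x))  # residual error shrinks as the gain w builds up
```

Because the update depends only on the sign of the error, it behaves the same for large and small error magnitudes, which is the source of the robustness the abstract refers to.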


2021 ◽  
Vol 54 (20) ◽  
pp. 705-710
Author(s):  
Yajie Bao ◽  
Vaishnavi Thesma ◽  
Javad Mohammadpour Velni

Author(s):  
Hoang Hai Nguyen ◽  
Tim Zieger ◽  
Sandra C. Wells ◽  
Anastasia Nikolakopoulou ◽  
Richard D. Braatz ◽  
...  

2011 ◽  
Vol 131 (11) ◽  
pp. 1889-1894
Author(s):  
Yuta Tsuchida ◽  
Michifumi Yoshioka

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
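The binning estimator mentioned in the abstract can be illustrated with a short sketch (an assumption-laden illustration, not the authors' implementation): continuous activations are discretized into equal-width bins, and the mutual information is computed from the resulting joint histogram as I(T; Y) = H(T) + H(Y) - H(T, Y). The synthetic data below is made up for the example.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a non-negative count vector."""
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

def binned_mi(t, y, n_bins=30):
    """Plug-in mutual information (bits) between binned activations t
    and discrete labels y, via a 2-D histogram."""
    joint, _, _ = np.histogram2d(t, y, bins=(n_bins, len(set(y))))
    return (entropy(joint.sum(axis=1))       # H(T)
            + entropy(joint.sum(axis=0))     # H(Y)
            - entropy(joint.ravel()))        # H(T, Y)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)            # binary labels
t = y + 0.1 * rng.normal(size=5000)     # activations that encode y well
print(binned_mi(t, y))                  # close to H(Y), i.e. about 1 bit
```

The estimate depends on `n_bins`, which is exactly why different binning choices in the literature can yield apparently inconsistent information-plane trajectories.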


1994 ◽  
Vol 04 (01) ◽  
pp. 23-51 ◽  
Author(s):  
JEROEN DEHAENE ◽  
JOOS VANDEWALLE

A number of matrix flows, based on isospectral and isodirectional flows, are studied and modified for the purpose of local implementability on a network structure. The flows converge to matrices with a predefined spectrum and eigenvectors that are determined by an external signal. The flows can be useful for adaptive signal processing applications and are applied to neural network learning.
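The idea of an isospectral flow can be sketched with a classical example in the same spirit, Brockett's double-bracket flow H' = [H, [H, N]] with [A, B] = AB - BA (an illustration of the flow family, not the paper's specific network flows). The matrix sizes, spectrum, and step size below are assumptions; a Cayley step is used so each update is an exact orthogonal conjugation and the spectrum is preserved to machine precision.

```python
import numpy as np

# Start from a symmetric H with known spectrum {1, 2, 3, 4}; the flow
# diagonalizes H while keeping its eigenvalues fixed, sorting them to
# match the ordering of the diagonal target N.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
H = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T   # spectrum fixed by construction
N = np.diag([4.0, 3.0, 2.0, 1.0])             # target ordering
I = np.eye(4)

dt = 0.002
for _ in range(25000):
    K = H @ N - N @ H                         # skew-symmetric [H, N]
    # Cayley step of H' = [H, K]: U is exactly orthogonal, so the
    # conjugation U H U^T cannot drift the spectrum.
    U = np.linalg.solve(I + 0.5 * dt * K, I - 0.5 * dt * K)
    H = U @ H @ U.T

print(np.round(np.diag(H), 3))  # diagonal approaches the spectrum, ordered like N
```

The flow is a gradient-like descent on the isospectral orbit of the initial matrix, which is what makes such flows attractive for learning rules with guaranteed spectral constraints.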

