Reference Governors Based on Online Learning of Maximal Output Admissible Set

Author(s):  
Manuel Lanchares ◽  
Ilya Kolmanovsky ◽  
Anouck Girard ◽  
Denise Rizzo

Reference governors are add-on control schemes that modify reference commands, when necessary, to avoid constraint violations. Implementing a reference governor typically requires explicit knowledge of a model of the system and of its constraints. In this paper, a reference governor that requires an explicit model of neither the system nor the constraints is presented. As the system operates, it constructs an approximation of the maximal output admissible set using online neural network learning. This approximation is used to modify the reference command so that the constraints are satisfied. The potential of the algorithm is demonstrated through simulations of an electric vehicle and an agile positioning system.
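As a rough illustration of how such a governor could operate online, the minimal Python sketch below uses hypothetical names and dynamics throughout: a hand-made admissibility test stands in for the paper's online-trained neural network, and the governor scales the step from the current command toward the desired reference by the largest factor that keeps the pair (x, v) inside the learned set approximation.

```python
import numpy as np

def governed_reference(x, r, v_prev, admissible, grid=21):
    """Scalar reference governor update: move from the previous command
    v_prev toward the desired reference r by the largest step for which
    (x, v) is still classified as inside the learned approximation of
    the maximal output admissible set."""
    best_v = v_prev  # kappa = 0 (hold the previous command) is assumed safe
    for kappa in np.linspace(0.0, 1.0, grid):
        v = v_prev + kappa * (r - v_prev)
        if admissible(x, v):
            best_v = v   # keep the most aggressive admissible command
        else:
            break        # stop at the first inadmissible step along the line
    return best_v

# Hand-made stand-in for the learned set-membership test (in the paper this
# role is played by an online-trained neural network).
toy_admissible = lambda x, v: abs(v) + 0.5 * abs(x) < 1.0

x, v, r = 0.4, 0.0, 2.0      # hypothetical state, command, desired reference
for _ in range(5):
    v = governed_reference(x, r, v, toy_admissible)
    x = 0.9 * x + 0.1 * v    # hypothetical stable closed-loop dynamics
    print(f"v = {v:+.3f}, x = {x:+.3f}")
```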

1996 ◽  
Vol 8 (4) ◽  
pp. 383-391
Author(s):  
Ju-Jang Lee ◽  
Sung-Woo Kim ◽  
Kang-Bark Park

Among the various neural network learning control schemes, feedback error learning (FEL) [8], [9] has been known to have advantages over other schemes. However, these advantages rest on the assumption that the system is linearly parameterized and stable. Thus, FEL has difficulty coping with uncertain and unstable systems. Furthermore, it is not clear in what minimization sense the FEL learning rule is obtained. To overcome these problems, we propose neural network control schemes using FEL with guaranteed performance. The proposed strategy is to use multi-layer neural networks, to design a stability-guaranteeing controller (SGC), and to derive a learning rule that achieves the desired tracking performance. With multi-layer neural networks, the learning capability can be fully utilized whether or not the system is linearly parameterized. The SGC makes it possible for the neural network to learn without risk of instability. As a result, the further the neural network learning proceeds, the better the tracking performance becomes.
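For intuition, here is a minimal feedback-error-learning sketch, not the paper's method: the plant, gains, and reference are hypothetical, a linear-in-features learner stands in for the multi-layer network, and a PD feedback law plays the SGC role. The defining feature of FEL is visible in the last update line, where the feedback signal itself serves as the training error for the feedforward term.

```python
import numpy as np

# Hypothetical plant: x'' = u - c*x' - k*x with c = 2, k = 5, so the ideal
# inverse-dynamics feedforward would be u = 1*r'' + 2*r' + 5*r.
Kp, Kd = 25.0, 10.0          # PD feedback, playing the SGC role
eta, dt = 0.5, 0.01          # learning rate and integration step
W = np.zeros(3)              # linear-in-features stand-in for the network
x, xd = 0.0, 0.0             # plant state: position, velocity

for k in range(20000):
    t = k * dt
    r   = np.sin(t) + 0.5 * np.sin(0.5 * t)      # two-tone reference for
    rd  = np.cos(t) + 0.25 * np.cos(0.5 * t)     #  persistent excitation
    rdd = -np.sin(t) - 0.125 * np.sin(0.5 * t)
    phi = np.array([rdd, rd, r])                 # feedforward features
    u_fb = Kp * (r - x) + Kd * (rd - xd)         # stabilizing feedback
    u = W @ phi + u_fb                           # feedforward + feedback
    xdd = u - 2.0 * xd - 5.0 * x                 # plant dynamics
    x, xd = x + dt * xd, xd + dt * xdd           # Euler integration
    W += eta * dt * u_fb * phi                   # FEL rule: the feedback
                                                 # signal drives the learning

print("learned feedforward weights:", W.round(2), "(ideal: [1. 2. 5.])")
```

As learning proceeds, the feedforward term absorbs the inverse dynamics and the feedback contribution shrinks, mirroring the claim that more learning yields better tracking.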


2011 ◽  
Vol 131 (11) ◽  
pp. 1889-1894
Author(s):  
Yuta Tsuchida ◽  
Michifumi Yoshioka

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, which tracks over training the mutual information between the input and a hidden layer and between a hidden layer and the target, has recently been proposed as a tool for analyzing the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must be estimated, which has led to apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
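A minimal sketch of the kind of binning estimator the paper builds on, with hypothetical helper names and synthetic data: continuous hidden activations T are discretized into grid cells, and I(T; Y) is estimated from the joint histogram of cell ids and labels.

```python
import numpy as np

def binned_mutual_information(t, y, n_bins=30):
    """Binning estimator of I(T; Y): discretize continuous activations t
    (shape (n, d)) into a grid of cells and estimate mutual information
    from the joint histogram of cell ids and labels y."""
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    cells = np.clip(np.digitize(t, edges) - 1, 0, n_bins - 1)
    ids = np.ravel_multi_index(cells.T, (n_bins,) * t.shape[1])
    n = len(y)
    joint, p_t, p_y = {}, {}, {}
    for cell, label in zip(ids, y):
        joint[(cell, label)] = joint.get((cell, label), 0) + 1
        p_t[cell] = p_t.get(cell, 0) + 1
        p_y[label] = p_y.get(label, 0) + 1
    return sum(
        (c / n) * np.log2(c * n / (p_t[cell] * p_y[label]))
        for (cell, label), c in joint.items()
    )

# Synthetic example: a random 2-unit "hidden layer" on toy data.
rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 4))
y = (x.sum(axis=1) > 0).astype(int)            # synthetic binary labels
hidden = np.tanh(x @ rng.normal(size=(4, 2)))  # hidden-layer activations
print(f"I(T; Y) = {binned_mutual_information(hidden, y):.3f} bits")
```

Note how the estimate depends on n_bins: this sensitivity to the discretization is exactly why different estimators have produced apparently contradictory information planes.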


1994 ◽  
Vol 04 (01) ◽  
pp. 23-51 ◽  
Author(s):  
Jeroen Dehaene ◽  
Joos Vandewalle

A number of matrix flows, based on isospectral and isodirectional flows, are studied and modified for the purpose of local implementability on a network structure. The flows converge to matrices with a predefined spectrum and with eigenvectors determined by an external signal. The flows can be useful for adaptive signal processing applications and are applied here to neural network learning.
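As one concrete instance of an isospectral flow of this kind, the following Python sketch integrates Brockett's double-bracket flow Ḣ = [H, [H, N]], a standard example rather than necessarily one of the paper's flows, and checks that the spectrum is (approximately, under forward-Euler integration) preserved while H converges toward the structure encoded by N.

```python
import numpy as np

def bracket(a, b):
    """Matrix commutator [A, B] = A@B - B@A."""
    return a @ b - b @ a

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                   # symmetric initial matrix; its spectrum
                                    # is the invariant of the flow
N = np.diag([4.0, 3.0, 2.0, 1.0])   # encodes the target structure/ordering

spectrum_before = np.sort(np.linalg.eigvalsh(H))
dt = 0.001
for _ in range(20000):              # forward-Euler integration of the flow
    H = H + dt * bracket(H, bracket(H, N))

print("spectrum before:", spectrum_before.round(3))
print("spectrum after: ", np.sort(np.linalg.eigvalsh(H)).round(3))
print("off-diagonal norm after flow:",
      round(float(np.linalg.norm(H - np.diag(np.diag(H)))), 4))
```

The flow drives H toward a diagonal matrix whose eigenvalue ordering matches that of N, illustrating convergence to a matrix with a predefined spectrum.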

