Neural Networks Predictive Controller Using an Adaptive Control Rate

2016 ◽  
pp. 614-633 ◽  
Author(s):  
Ahmed Mnasser ◽  
Faouzi Bouani ◽  
Mekki Ksouri

A model predictive control design for nonlinear systems based on artificial neural networks is discussed. Feedforward neural networks are used to describe the unknown nonlinear dynamics of the real system, and the backpropagation algorithm is used offline to train the neural network model. The optimal control actions are computed by solving a nonconvex optimization problem with the gradient method, in which the descent step size is a sensitive factor for convergence. An adaptive variable control rate based on a Lyapunov function candidate is therefore proposed, and asymptotic convergence of the predictive controller is established. The stability of the closed-loop system based on the neural model is proved. To demonstrate the robustness of the proposed predictive controller under set-point changes and load disturbances, a simulation example is considered. A comparison with the control performance achieved by a Levenberg-Marquardt method is also provided to illustrate the effectiveness of the proposed controller.
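The gradient step on the predictive cost can be sketched as follows. This is a minimal illustrative sketch, not the authors' scheme: the one-step neural predictor, the cost weights, and the backtracking rule for adapting the rate are all assumptions introduced here.

```python
import numpy as np

# Hypothetical neural one-step predictor: y_hat = w2 . tanh(W1 [y, u])
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))
w2 = rng.normal(size=4)

def predict(y, u):
    return w2 @ np.tanh(W1 @ np.array([y, u]))

def cost(y, u, r, lam=0.01):
    # one-step predictive cost: tracking error plus control penalty
    e = r - predict(y, u)
    return e * e + lam * u * u

def grad_u(y, u, r, h=1e-6):
    # central-difference gradient of the cost w.r.t. the control move
    return (cost(y, u + h, r) - cost(y, u - h, r)) / (2 * h)

def mpc_step(y, r, u0=0.0, eta=0.5, iters=50):
    """Gradient descent on the predictive cost with an adaptive rate:
    eta is halved whenever a step would increase the cost."""
    u = u0
    for _ in range(iters):
        g = grad_u(y, u, r)
        while eta > 1e-10 and cost(y, u - eta * g, r) > cost(y, u, r):
            eta *= 0.5
        if cost(y, u - eta * g, r) > cost(y, u, r):
            break  # no further descent possible at this resolution
        u -= eta * g
    return u

u_star = mpc_step(y=0.2, r=1.0)
```

The backtracking condition makes the cost non-increasing at every iteration, which is the practical role a Lyapunov-based rate adaptation plays: guaranteeing descent regardless of the local curvature of the nonconvex cost.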


2011 ◽  
Vol 121-126 ◽  
pp. 4239-4243 ◽  
Author(s):  
Du Jou Huang ◽  
Yu Ju Chen ◽  
Huang Chu Huang ◽  
Yu An Lin ◽  
Rey Chue Hwang

Chromatic aberration estimation of touch panel (TP) film using neural networks is presented in this paper. Neural networks with the error back-propagation (BP) learning algorithm were used to capture the complex relationship between the chromatic aberration, i.e., the L*a*b* values, and the relevant parameters of the TP decoration film. The goal is an artificial intelligence (AI) estimator, based on the neural model, for estimating this physical property of TP film. The simulation results show that the estimates of the chromatic aberration of TP film are very accurate. In other words, such an AI estimator is quite promising and has clear commercial potential.
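The estimator structure can be sketched as a small BP-trained network mapping film parameters to the three color values. Everything here is a stand-in: the data are synthetic, and the four "process parameters" and network sizes are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 4 hypothetical film-process parameters -> L*, a*, b*
X = rng.uniform(-1, 1, size=(200, 4))
true_W = rng.normal(size=(4, 3))
Y = np.tanh(X @ true_W)              # unknown nonlinear mapping to be learned

# One-hidden-layer network trained by plain error back-propagation
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 3))
eta = 0.05                            # learning rate

def forward(X):
    H = np.tanh(X @ W1)
    return H, H @ W2

mse0 = np.mean((forward(X)[1] - Y) ** 2)   # error before training
for _ in range(500):
    H, Y_hat = forward(X)
    E = Y_hat - Y                          # output error
    dW2 = H.T @ E / len(X)
    dH = (E @ W2.T) * (1 - H * H)          # back-propagate through tanh
    dW1 = X.T @ dH / len(X)
    W2 -= eta * dW2
    W1 -= eta * dW1

mse = np.mean((forward(X)[1] - Y) ** 2)    # error after training
```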


Author(s):  
Pin-Lin Liu

This paper deals with the stability problem of neural networks with discrete and leakage interval time-varying delays. First, a novel Lyapunov-Krasovskii functional is constructed based on the model of neural networks with leakage time-varying delays. The delay decomposition approach (DDA) and integral inequality approach (IIA) are employed together, which helps to estimate the derivative of the Lyapunov-Krasovskii functional and effectively extends the application area of the results. Second, by taking into account the lower and upper bounds of the time delays and their derivatives, a criterion for asymptotic stability is presented in terms of a linear matrix inequality (LMI), which can easily be checked with the Matlab LMI Toolbox. Third, the resulting criteria apply when the delay derivative is lower and upper bounded, when the lower bound is unknown, and when no restrictions are cast upon the derivative. Finally, through numerical examples, the criteria are compared with related ones: a larger admissible delay upper bound is obtained, which demonstrates that our stability criterion reduces conservatism more effectively than earlier ones.
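For reference, the class of systems analyzed is typically written in the following standard form; the notation and bounds below are the conventional ones in this literature, not copied from the paper.

```latex
\dot{x}(t) = -A\,x(t-\sigma) + W_0\, g\big(x(t)\big) + W_1\, g\big(x(t-\tau(t))\big),
\qquad 0 \le \tau_m \le \tau(t) \le \tau_M, \quad \dot{\tau}(t) \le \mu,
```

where $\sigma$ is the leakage delay, $\tau(t)$ the discrete time-varying delay, $A$ a positive diagonal matrix, $W_0, W_1$ connection weight matrices, and $g(\cdot)$ the activation function. The LMI criterion certifies a Lyapunov-Krasovskii functional $V(x_t)$ whose derivative along trajectories is negative definite for all delays within the stated bounds.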


2000 ◽  
Vol 12 (2) ◽  
pp. 451-472 ◽  
Author(s):  
Fation Sevrani ◽  
Kennichi Abe

In this article we present techniques for designing associative memories to be implemented by a class of synchronous discrete-time neural networks based on a generalization of the brain-state-in-a-box neural model. First, we address the local and global qualitative properties of the class of neural networks considered. Our approach to the stability analysis of the equilibrium points of the network gives insight into the extent of the domain of attraction for the patterns to be stored as asymptotically stable equilibrium points, and is useful both in analyzing the retrieval performance of the network and for design purposes. Using the analysis results as constraints, the associative memory is designed by solving a constrained optimization problem whereby each of the stored patterns is guaranteed a substantial domain of attraction. The performance of the designed network is illustrated by means of three specific examples.
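The brain-state-in-a-box (BSB) dynamics underlying this class can be sketched as follows; this is the standard form of the model, and the particular generalization used in the article may add further terms.

```latex
x(k+1) = g\big( x(k) + \alpha \,( W x(k) + b ) \big),
\qquad
g(s) = \begin{cases} 1, & s > 1, \\ s, & |s| \le 1, \\ -1, & s < -1, \end{cases}
```

with $g(\cdot)$ applied componentwise, $W$ the weight matrix, $\alpha > 0$ a step size, and $b$ a bias vector. The design task is to choose $W$ and $b$ so that each pattern to be stored, a vertex of the hypercube $[-1,1]^n$, becomes an asymptotically stable equilibrium with a large domain of attraction.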


2002 ◽  
Vol 12 (01) ◽  
pp. 45-67 ◽  
Author(s):  
M. R. MEYBODI ◽  
H. BEIGY

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (η), momentum factor (α), and steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm, and many techniques that adaptively adjust them have been developed to increase the speed of convergence. In this paper, we present several classes of learning-automata-based solutions to the problem of adapting the BP algorithm's parameters. By interconnecting learning automata with the feedforward neural network, we use a learning automata scheme to adjust the parameters η, α, and λ based on observation of the random response of the neural network. One important aspect of the proposed schemes is their ability to escape from local minima with high probability during the training period. The feasibility of the proposed methods is shown through simulations on several problems.
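The roles of the three parameters can be made concrete with a plain BP sketch. This shows fixed η, α, and λ only, not the learning-automata adaptation the paper proposes; the task, network size, and parameter values are assumptions for illustration.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # steepness parameter lambda scales the slope of the activation
    return 1.0 / (1.0 + np.exp(-lam * x))

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # XOR-like target

eta, alpha, lam = 0.2, 0.8, 1.0                 # rate, momentum, steepness
W1 = rng.normal(size=(2, 6))
W2 = rng.normal(size=6)
v1 = np.zeros_like(W1)                          # previous weight changes
v2 = np.zeros_like(W2)

def forward(X):
    H = sigmoid(X @ W1, lam)
    return H, sigmoid(H @ W2, lam)

mse0 = np.mean((forward(X)[1] - y) ** 2)        # error before training
for _ in range(3000):
    H, out = forward(X)
    d_out = (out - y) * lam * out * (1 - out)          # output-layer delta
    d_hid = np.outer(d_out, W2) * lam * H * (1 - H)    # hidden-layer deltas
    # the momentum factor alpha reuses the previous weight change v
    v2 = -eta * (H.T @ d_out) / len(X) + alpha * v2
    v1 = -eta * (X.T @ d_hid) / len(X) + alpha * v1
    W2 += v2
    W1 += v1

mse = np.mean((forward(X)[1] - y) ** 2)         # error after training
```

In an automata-based scheme, each parameter would be selected per step from a discrete action set according to reinforcement from the observed error, rather than held fixed as here.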


Author(s):  
Filip Ponulak

Analysis of the ReSuMe Learning Process for Spiking Neural Networks

In this paper we perform an analysis of the learning process with the ReSuMe method and spiking neural networks (Ponulak, 2005; Ponulak, 2006b). We investigate how the particular parameters of the learning algorithm affect the process of learning. We consider the issue of speeding up the adaptation process while maintaining the stability of the optimal solution. This is an important issue in many real-life tasks where neural networks are applied and where fast learning convergence is highly desirable.
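The ReSuMe weight update being analyzed has the following general form; the notation is illustrative, and the specific window and constants are among the tunable parameters the paper studies.

```latex
\frac{d w(t)}{dt} \;=\; \big[ S_d(t) - S_o(t) \big]
\left[ \, a + \int_0^{\infty} W(s)\, S_{\mathrm{in}}(t-s)\, ds \right],
```

where $S_d$, $S_o$, $S_{\mathrm{in}}$ are the desired, output, and input spike trains (sums of Dirac impulses), $a$ is a non-correlative term, and $W(s)$ is a learning window, e.g. $W(s) = A\, e^{-s/\tau}$. The window amplitude and time constant, together with $a$, govern the trade-off between adaptation speed and stability discussed above.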

