Recurrent Neural Adaptive Control of Nonlinear Oscillatory Systems Using a Complex-valued Levenberg-Marquardt Learning Algorithm

2015 ◽  
Vol 13 (1-2) ◽  
pp. 10-24
Author(s):  
Ieroham Baruch ◽  
Edmundo P. Reynaud

Abstract In this work, a Recursive Levenberg-Marquardt learning algorithm in the complex domain is developed and applied to the training of two adaptive control schemes composed of Complex-Valued Recurrent Neural Networks. Furthermore, we apply the identification scheme and both control schemes to a particular nonlinear, oscillatory mechanical plant to validate the performance of the adaptive neural controller and the learning algorithm. The comparative simulation results show the better performance of the newly proposed Complex-Valued Recursive Levenberg-Marquardt learning algorithm over the gradient-based recursive back-propagation algorithm.
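
Illustrative sketch (not taken from the paper): the core of a complex-valued Levenberg-Marquardt update can be written with the conjugate-transpose normal equations. The minimal numpy step below assumes a generic model with complex parameters w, residual function e(w) and Jacobian J(w); it is a batch step for brevity, whereas the paper's algorithm is recursive and trains recurrent networks, which this toy does not reproduce.

import numpy as np

def complex_lm_step(residual_fn, jacobian_fn, w, damping=1e-2):
    # One batch Levenberg-Marquardt step for complex parameters w:
    # solve (J^H J + damping*I) dw = -J^H e, where J^H is the conjugate transpose.
    e = residual_fn(w)
    J = jacobian_fn(w)
    A = J.conj().T @ J + damping * np.eye(w.size)
    dw = np.linalg.solve(A, -J.conj().T @ e)
    return w + dw

# Toy usage (hypothetical data): fit a single complex gain w in y = w * x.
x = np.array([1 + 1j, 2 - 1j, 0.5 + 0.3j])
y = (0.8 - 0.6j) * x
w = np.array([1.0 + 0.0j])
for _ in range(10):
    w = complex_lm_step(lambda p: p[0] * x - y,
                        lambda p: x.reshape(-1, 1),
                        w)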

2012 ◽  
Vol 433-440 ◽  
pp. 3923-3928
Author(s):  
Maryam Sadeghi ◽  
Majid Gholami

Adaptive control is a methodology introduced for the dynamic identification and control of nonlinear systems with unknown parameters and no precise mathematical model. Artificial Neural Networks (ANNs) offer a parallel approach to such problems and yield a robust control scheme whose learning algorithm resembles that of the biological brain. A back-propagation algorithm is proposed for updating the ANN weighting factors through an online learning procedure. This research investigates an ANN training algorithm that elaborates the switching-angle signals for controlling the Intelligent Universal Transformer (IUT) in its input and output stages. The IUT supports Advanced Distribution Automation (ADA) with new capabilities in automation, management and control. An online adaptive ANN scheme is developed for controlling the input current and output voltages of the IUT, whose major benefits and service options include real-time voltage regulation, the capability to provide three-phase power outputs from a single-phase input, energy storage capability, a 48 V DC output option, harmonic filtering, a reliable 240 V AC 400 Hz supply for communication use together with two 240 V AC 60 Hz outputs, automatic sag correction, dynamic system monitoring, and robustness to input and load disturbances.
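
As an illustration of the online back-propagation update described above, the following minimal sketch shows a generic one-hidden-layer network whose weights are corrected one sample at a time; the layer sizes, learning rate and signal names are placeholders and not the IUT controller configuration used in the paper.

import numpy as np

class OnlineBackpropNet:
    # Minimal one-hidden-layer network with per-sample (online) back-propagation.
    def __init__(self, n_in, n_hidden, n_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ self.h + self.b2

    def update(self, x, target):
        # One online step: forward pass, squared-error gradient, weight correction.
        y = self.forward(x)
        err = y - target
        dW2 = np.outer(err, self.h)
        dh = (self.W2.T @ err) * (1.0 - self.h ** 2)   # back-propagate through tanh
        dW1 = np.outer(dh, x)
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * err
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * dh
        return float(err @ err)

# Hypothetical usage: update once per sampled measurement vector x_k,
# with target_k being the desired switching-angle command.
# net = OnlineBackpropNet(n_in=4, n_hidden=10, n_out=1)
# for x_k, target_k in stream:
#     net.update(x_k, target_k)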


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 553
Author(s):  
Salim Miloudi ◽  
Yulin Wang ◽  
Wenjia Ding

Clustering algorithms for multi-database mining (MDM) rely on computing $(n^2-n)/2$ pairwise similarities between $n$ multiple databases to generate and evaluate $m\in[1,(n^2-n)/2]$ candidate clusterings in order to select the ideal partitioning that optimizes a predefined goodness measure. However, when these pairwise similarities are distributed around the mean value, the clustering algorithm becomes indecisive when choosing which database pairs are eligible to be grouped together. Consequently, a trivial result is produced by putting all the $n$ databases in one cluster or by returning $n$ singleton clusters. To tackle the latter problem, we propose a learning algorithm to reduce the fuzziness of the similarity matrix by minimizing a weighted binary entropy loss function via gradient descent and back-propagation. As a result, the learned model improves the certainty of the clustering algorithm by correctly identifying the optimal database clusters. Additionally, in contrast to gradient-based clustering algorithms, which are sensitive to the choice of the learning rate and require more iterations to converge, we propose a learning-rate-free algorithm to assess the candidate clusterings generated on the fly in fewer, upper-bounded iterations. To achieve our goal, we use coordinate descent (CD) and back-propagation to search for the optimal clustering of the $n$ multiple databases in a way that minimizes a convex clustering quality measure $L(\theta)$ in fewer than $(n^2-n)/2$ iterations. By using a max-heap data structure within our CD algorithm, we optimally choose the largest weight variable $\theta_{p,q}^{(i)}$ at each iteration $i$, such that taking the partial derivative of $L(\theta)$ with respect to $\theta_{p,q}^{(i)}$ allows us to attain the next steepest descent minimizing $L(\theta)$ without using a learning rate. Through a series of experiments on multiple database samples, we show that our algorithm outperforms the existing clustering algorithms for MDM.
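
The max-heap-driven, learning-rate-free coordinate descent can be sketched structurally as follows. This is only an illustration under simplifying assumptions: the coordinate weights are fixed, the 1-D update is computed numerically with scipy's minimize_scalar instead of the paper's closed-form partial-derivative step, and the toy weighted binary entropy loss carries a small quadratic term so that each coordinate has a finite minimizer.

import heapq
import numpy as np
from scipy.optimize import minimize_scalar

def heap_coordinate_descent(loss, theta, weights, n_iter):
    # Coordinates are visited in order of decreasing (illustrative, fixed) weights,
    # and each visited coordinate is set to the exact 1-D minimizer of the convex
    # loss along that coordinate, so no learning rate is needed.
    heap = [(-w, idx) for idx, w in enumerate(weights)]   # heapq is a min-heap
    heapq.heapify(heap)
    for _ in range(min(n_iter, len(heap))):
        _, p = heapq.heappop(heap)                        # largest remaining weight
        def along_coord(t, p=p):
            trial = theta.copy()
            trial[p] = t
            return loss(trial)
        theta[p] = minimize_scalar(along_coord).x         # exact line minimization
    return theta

# Toy weighted binary-entropy-style loss on hypothetical data; the small
# quadratic term keeps every 1-D minimizer finite.
targets = np.array([1.0, 0.0, 1.0, 0.0])
w = np.array([0.9, 0.4, 0.7, 0.2])
def wbce(theta):
    s = np.clip(1.0 / (1.0 + np.exp(-theta)), 1e-12, 1.0 - 1e-12)
    ce = -w * (targets * np.log(s) + (1.0 - targets) * np.log(1.0 - s))
    return float(np.sum(ce) + 0.5 * np.sum(theta ** 2))
theta_opt = heap_coordinate_descent(wbce, np.zeros(4), w, n_iter=4)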


2015 ◽  
Vol 792 ◽  
pp. 44-50
Author(s):  
L.E. Kozlova ◽  
E.V. Bolovin

Today, one of the most common ways to control the smooth starting and stopping of induction motors is a soft-start system. This control method requires a closed-loop, speed-controlled asynchronous electric drive of the TVR-IM type. Using real speed sensors is undesirable because of a number of inconveniences in the operation of the drive. An observer based on a neural network is more convenient than real sensors: its advantages are robustness, strong generalization properties, no requirements on the motor parameters, and relative ease of creation. This article presents the research into, and selection of, the best learning algorithm for a neuroemulator of the angular velocity of a TVR-IM-type electric drive. The learning algorithms investigated were gradient descent back-propagation, gradient descent with momentum back-propagation, the Levenberg-Marquardt algorithm, and scaled conjugate gradient back-propagation (SCG). The inputs of the neuroemulator were the pre-processed signals from the real stator current and stator voltage sensors and their delayed values, as well as a delayed feedback signal of the estimated speed. A comparative analysis of the learning algorithms was performed on a simulation model of the asynchronous electric drive implemented in MATLAB Simulink, with the electric drive running in dynamic mode. The simulation results demonstrate that the best learning method is the Levenberg-Marquardt algorithm.
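
A minimal sketch of how the neuroemulator's delayed inputs could be assembled and replayed is given below; the delay orders, signal names and the trained network itself are placeholders, since the abstract does not specify them, and the actual study was performed in MATLAB Simulink rather than Python.

import numpy as np

def build_regressor(i_s, u_s, omega_est, k, n_delays=2):
    # Assemble the neuroemulator input vector at sample k: stator current and
    # voltage with n_delays past values each, plus the previously estimated
    # speed as delayed feedback. The delay orders are illustrative only.
    cur = [i_s[k - d] for d in range(n_delays + 1)]   # i(k), i(k-1), ...
    vol = [u_s[k - d] for d in range(n_delays + 1)]   # u(k), u(k-1), ...
    spd = [omega_est[k - 1]]                          # delayed speed estimate
    return np.array(cur + vol + spd)

def emulate_speed(net, i_s, u_s, n_samples, n_delays=2):
    # Replay a record of sensor data through an already-trained emulator 'net'
    # (any callable regressor), feeding its own delayed estimate back in.
    omega_est = np.zeros(n_samples)
    for k in range(max(n_delays, 1), n_samples):
        x = build_regressor(i_s, u_s, omega_est, k, n_delays)
        omega_est[k] = net(x)
    return omega_est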


2020 ◽  
Vol 71 (6) ◽  
pp. 66-74
Author(s):  
Younis M. Younis ◽  
Salman H. Abbas ◽  
Farqad T. Najim ◽  
Firas Hashim Kamar ◽  
Gheorghe Nechifor

A comparison between artificial neural network (ANN) and multiple linear regression (MLR) models was employed to predict the heat of combustion, and the gross and net heat values, of a diesel fuel engine, based on the chemical composition of the diesel fuel. One hundred and fifty samples of Iraqi diesel provided data from chromatographic analysis. Eight parameters were applied as inputs in order to predict the gross and net heat of combustion of the diesel fuel. A trial-and-error method was used to determine the shape of the individual ANN. The results showed that the prediction accuracy of the ANN model was greater than that of the MLR model in predicting the gross heat value. The best neural network for predicting the gross heating value was a back-propagation network (8-8-1), using the Levenberg-Marquardt algorithm for the second step of network training, with R = 0.98502 for the test data. In the same way, the best neural network for predicting the net heating value was a back-propagation network (8-5-1), using the Levenberg-Marquardt algorithm for the second step of network training, with R = 0.95112 for the test data.
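
The ANN-versus-MLR comparison can be reproduced in outline as follows; the composition matrix X and heating values y below are random placeholders for the 150 Iraqi diesel samples, and scikit-learn's MLPRegressor (which does not offer Levenberg-Marquardt) is used with the lbfgs solver as a stand-in trainer for the 8-8-1 network.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data standing in for the 150 samples x 8 composition features.
rng = np.random.default_rng(0)
X = rng.random((150, 8))
y = X @ rng.random(8) + 0.1 * rng.standard_normal(150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)

# Correlation coefficient R on the held-out data, the figure of merit reported above.
for name, model in [("MLR", mlr), ("ANN 8-8-1", ann)]:
    r = np.corrcoef(y_te, model.predict(X_te))[0, 1]
    print(f"{name}: R = {r:.5f}")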

