Building the Structure and the Neuroemulator Angular Velocity's Learning Algorithm Selection of the Electric Drive of TVR-IM Type

2015 ◽  
Vol 792 ◽  
pp. 44-50
Author(s):  
L.E. Kozlova ◽  
E.V. Bolovin

Today, one of the most common ways to control the smooth starting and stopping of induction motors is the soft-start system. This control method requires a closed-loop-speed asynchronous electric drive of TVR-IM type. Using real speed sensors is undesirable because of a number of inconveniences in operating the drive. An observer based on a neural network is more convenient than real sensors: its advantages are robustness, strong generalization, no requirements on the motor parameters, and relative ease of creation. This article presents the research into, and selection of, the best learning algorithm for the neuroemulator of the angular velocity of the electric drive of TVR-IM type. The following learning algorithms were investigated: gradient descent back-propagation, gradient descent with momentum back-propagation, the Levenberg-Marquardt algorithm, and scaled conjugate gradient back-propagation (SCG). The input parameters of the neuroemulator were the pre-processed signals from the real stator-current and stator-voltage sensors and their delays, as well as a delayed feedback signal from the estimated speed. A comparative analysis of the learning algorithms was performed on a simulation model of the asynchronous electric drive implemented in MATLAB Simulink, with the electric drive running in dynamic mode. The simulation results demonstrate that the best learning method is the Levenberg-Marquardt algorithm.
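The Levenberg-Marquardt update that the article selects can be sketched in a few lines: the weight step solves a damped Gauss-Newton system built from the Jacobian of the residuals. The tiny linear model, data, and damping factor `mu` below are illustrative, not taken from the paper's neuroemulator.

```python
import numpy as np

# One Levenberg-Marquardt step for a least-squares fit of y = w0 + w1*x:
# w <- w - (J^T J + mu I)^{-1} J^T e, where e is the residual vector.
def lm_step(w, x, y, mu):
    residual = (w[0] + w[1] * x) - y             # e(w), shape (N,)
    J = np.stack([np.ones_like(x), x], axis=1)   # Jacobian de/dw, shape (N, 2)
    H = J.T @ J + mu * np.eye(2)                 # damped Gauss-Newton matrix
    return w - np.linalg.solve(H, J.T @ residual)

x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x                                # synthetic noiseless data
w = np.zeros(2)
for _ in range(50):
    w = lm_step(w, x, y, mu=1e-3)
```

In practice `mu` is adapted between iterations (raised when a step fails, lowered when it succeeds), which interpolates between gradient descent and Gauss-Newton.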

2015 ◽  
Vol 13 (1-2) ◽  
pp. 10-24
Author(s):  
Ieroham Baruch ◽  
Edmundo P. Reynaud

Abstract In this work, a Recursive Levenberg-Marquardt learning algorithm in the complex domain is developed and applied to the training of two adaptive control schemes composed of Complex-Valued Recurrent Neural Networks. Furthermore, we apply the identification and both control schemes to a particular case of a nonlinear, oscillatory mechanical plant to validate the performance of the adaptive neural controller and the learning algorithm. The comparative simulation results show the superior performance of the newly proposed Complex-Valued Recursive Levenberg-Marquardt learning algorithm over the gradient-based recursive Back-propagation one.


2000 ◽  
Vol 12 (4) ◽  
pp. 881-901 ◽  
Author(s):  
Tom Heskes

Several studies have shown that natural gradient descent for on-line learning is much more efficient than standard gradient descent. In this article, we derive natural gradients in a slightly different manner and discuss implications for batch-mode learning and pruning, linking them to existing algorithms such as Levenberg-Marquardt optimization and optimal brain surgeon. The Fisher matrix plays an important role in all these algorithms. The second half of the article discusses a layered approximation of the Fisher matrix specific to multilayered perceptrons. Using this approximation rather than the exact Fisher matrix, we arrive at much faster “natural” learning algorithms and more robust pruning procedures.
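The natural gradient update the article builds on preconditions the ordinary gradient by the inverse Fisher matrix, w ← w − η F⁻¹∇L. A minimal sketch, assuming a logistic-regression likelihood (where the Fisher matrix has the standard closed form F = XᵀDX/N with D_ii = p_i(1 − p_i)); the data are synthetic:

```python
import numpy as np

def natural_gradient_step(w, X, y, eta=1.0):
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
    grad = X.T @ (p - y) / len(y)           # ordinary gradient of the NLL
    D = p * (1.0 - p)
    F = (X * D[:, None]).T @ X / len(y)     # Fisher information matrix
    return w - eta * np.linalg.solve(F, grad)  # natural gradient step

rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(float)

w = np.zeros(2)
for _ in range(25):
    w = natural_gradient_step(w, X, y)
```

For this model the exact-Fisher natural gradient with η = 1 coincides with Fisher scoring, which is why the article's layered Fisher approximation matters: for multilayered perceptrons the exact F is too expensive to form and invert.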


Author(s):  
I.M. Kotsur ◽  
A.V. Hurazda ◽  
B.A. Dolia ◽  
L.E. Shestov

Purpose. Improving the efficiency and energy performance of an asynchronous electric drive for the stationary fan units of the main ventilation line of mines. Methodology. The research was carried out using the methods of electrical circuit theory, mathematical physics, simulation, interpolation, and approximation. Findings. Electromagnetic and energy processes were investigated in an asynchronous electric drive system with pulse control under a fan load, taking into account the variable aerodynamic parameters of the main ventilation line of mines. It has been proven that the electric drive system is able to respond with high accuracy and reliability to changes in the aerodynamic parameters of the main ventilation line of mines. This also increases the power factor of the electric drive under a fan load from 0.8 to 0.93 p.u., and the efficiency from 92.5% to 94.5%, when regulating in the range of the operating slip of the rotor of the drive fan motor = 0.5 ÷, which is on average 0.25% to 40% higher in comparison with unregulated electric drive systems. Recommendations have been developed for the design and rational selection of the rated fan capacity for the main ventilation line to achieve the best energy efficiency level of the electric drive. Originality. Electro-mechanical, electro-energy, and aerodynamic processes in the dynamic modes of the fan electric drive were investigated. The fan-loaded "induction motor-converter" system has been proven to be self-regulating: it is able to respond with high accuracy and reliability, even at low switching frequencies of the power chopper, to any changes in the aerodynamic parameters of the main ventilation line of mines. Practical value. Recommendations have been developed for the design and rational selection of the rated fan capacity for the main ventilation line to achieve the best energy efficiency level of the electric drive.


1991 ◽  
Vol 02 (04) ◽  
pp. 283-289 ◽  
Author(s):  
Ronny Meir

We discuss the derivation of deterministic learning rules from an underlying stochastic system. We focus on the symmetrically connected Boltzmann machine and show how various approximations give rise to different learning algorithms. In particular, we show how to derive a symmetrized form of the recurrent back-propagation learning algorithm from the Boltzmann machine. We also discuss the connection between the different deterministic learning algorithms, focusing on the probability distributions from which they originate. We further show that even when two probability distributions have the same moments to any finite order, they can give rise to two distinct learning algorithms.


Author(s):  
Bowen Weng ◽  
Huaqing Xiong ◽  
Yingbin Liang ◽  
Wei Zhang

Existing convergence analyses of Q-learning mostly focus on vanilla stochastic gradient descent (SGD) type updates. Although Adaptive Moment Estimation (Adam) is commonly used in practical Q-learning algorithms, no convergence guarantee has been provided for Q-learning with this type of update. In this paper, we first characterize the convergence rate of Q-AMSGrad, the Q-learning algorithm with the AMSGrad update (a commonly adopted alternative of Adam for theoretical analysis). To further improve performance, we propose incorporating a momentum restart scheme into Q-AMSGrad, resulting in the so-called Q-AMSGradR algorithm. The convergence rate of Q-AMSGradR is also established. Our experiments on a linear quadratic regulator problem demonstrate that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates. The two algorithms also exhibit significantly better performance than the DQN learning method over a batch of Atari 2600 games.
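The idea behind Q-AMSGrad can be sketched in tabular form: the TD error is fed through an AMSGrad update (first moment m, second moment v, and its running maximum v̂) instead of a plain SGD step. The two-state MDP and hyperparameters below are illustrative, not the paper's setup:

```python
import numpy as np

# Tabular Q-AMSGrad sketch: treat -td_error as the gradient of the TD loss
# for the visited (state, action) entry and apply the AMSGrad rule to it.
n_states, n_actions, gamma = 2, 2, 0.9
R = np.array([[1.0, 0.0], [0.0, 1.0]])   # reward for (state, action)
P = np.array([[0, 1], [1, 0]])           # deterministic next state
Q = np.zeros((n_states, n_actions))
m = np.zeros_like(Q); v = np.zeros_like(Q); v_hat = np.zeros_like(Q)
beta1, beta2, alpha, eps = 0.9, 0.999, 0.1, 1e-8

rng = np.random.default_rng(1)
s = 0
for _ in range(5000):
    a = int(rng.integers(n_actions))                    # explore uniformly
    s2 = P[s, a]
    td_error = R[s, a] + gamma * Q[s2].max() - Q[s, a]
    g = np.zeros_like(Q); g[s, a] = -td_error           # semi-gradient
    m = beta1 * m + (1 - beta1) * g                     # first moment
    v = beta2 * v + (1 - beta2) * g**2                  # second moment
    v_hat = np.maximum(v_hat, v)                        # AMSGrad: keep max
    Q = Q - alpha * m / (np.sqrt(v_hat) + eps)
    s = s2
```

Keeping the running maximum v̂ (rather than v itself, as Adam does) prevents the effective step size from growing, which is what makes AMSGrad amenable to the kind of convergence analysis the paper gives.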


Processes ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 295 ◽  
Author(s):  
Jie Yang ◽  
Junhong Zhao ◽  
Lu Lu ◽  
Tingting Pan ◽  
Sidra Jubair

The back-propagation (BP) algorithm is usually used to train convolutional neural networks (CNNs) and has driven great progress in image classification. It updates weights with gradient descent, and the farther a sample is from the target, the greater its contribution to the weight change. However, this diminishes the influence of samples that are classified correctly but lie close to the classification boundary. This paper defines the classification confidence as the degree to which a sample belongs to its correct category, and divides the samples of each category into danger and safe samples according to a dynamic classification-confidence threshold. A new learning algorithm is then presented that penalizes the loss function with the danger samples only, rather than all samples, so that the CNN pays more attention to danger samples and learns effective information more accurately. Experiments carried out on the MNIST dataset and three sub-datasets of CIFAR-10 show that on MNIST the accuracy of the non-improved CNN reached 99.246%, while that of PCNN reached 99.3%; on the three sub-datasets of CIFAR-10, the accuracies of the non-improved CNN are 96.15%, 88.93%, and 94.92%, respectively, while those of PCNN are 96.44%, 89.37%, and 95.22%, respectively.
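A minimal sketch of the penalized loss described above. The confidence definition (softmax probability of the true class) and the threshold rule (the batch mean) are our reading of the abstract, not necessarily the paper's exact formulas:

```python
import numpy as np

def penalized_loss(logits, targets, penalty_weight=0.5):
    # Classification confidence: softmax probability of the true class.
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    confidence = probs[np.arange(len(targets)), targets]
    # Dynamic threshold: samples below the batch-mean confidence are
    # "danger" samples (including correct-but-near-boundary ones).
    danger = confidence < confidence.mean()
    nll = -np.log(confidence)                        # per-sample cross-entropy
    loss = nll.mean()                                # ordinary loss term
    if danger.any():
        loss += penalty_weight * nll[danger].mean()  # extra weight on danger
    return loss
```

Penalizing with the danger subset only, rather than reweighting every sample, is what lets the network keep learning from correctly classified boundary samples whose gradient contribution would otherwise vanish.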


2015 ◽  
Vol 24 (03) ◽  
pp. 1550001 ◽  
Author(s):  
George Rudolph ◽  
Tony Martinez

In the process of selecting a machine learning algorithm to solve a problem, questions like the following commonly arise: (1) Are some algorithms basically the same, or are they fundamentally different? (2) How different? (3) How do we measure that difference? (4) If we want to combine algorithms, which algorithms and combinators should be tried? This research proposes COD (Classifier Output Difference) distance as a diversity metric. COD separates difference from accuracy: it goes beyond accuracy to consider differences in output behavior as the basis for comparison. The paper extends earlier work on COD by giving a basic comparison to other diversity metrics, and by giving an example of using COD data as a predictive model from which to select algorithms for an ensemble. COD may fill a niche in metalearning as a predictive aid for selecting algorithms for ensembles and hybrid systems by providing a simple, straightforward, computationally reasonable alternative to other approaches.
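Since COD compares output behavior rather than accuracy, a natural formulation is the fraction of instances on which two classifiers' predictions disagree. The sketch below is our paraphrase of that idea, not necessarily the paper's exact definition:

```python
import numpy as np

def cod_distance(preds_a, preds_b):
    """Fraction of instances on which two classifiers' outputs differ."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Two classifiers' predicted labels on the same five test instances.
a = [0, 1, 1, 0, 2]
b = [0, 1, 2, 0, 2]
print(cod_distance(a, b))  # 0.2: they disagree on 1 of 5 instances
```

Note that two classifiers with identical accuracy can still be far apart under this distance, which is exactly the property that makes it useful for picking diverse ensemble members.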

