Supervised Neural Learning Algorithms

2012 ◽ 
Vol 9 (2) ◽ 
pp. 17 ◽ 
Author(s):  
K Sedhuraman ◽  
S Himavathi ◽  
A Muthuramalingam

In this paper, a novel reactive power based model reference neural learning adaptive system (RP-MRNLAS) is proposed. Model reference adaptive system (MRAS) based speed estimation is one of the most popular methods used for sensorless controlled induction motor drives. In a conventional MRAS, the error adaptation is done using a proportional-integral (PI) controller. The non-linear mapping capability of a neural network (NN) and its powerful learning algorithms have increased the applications of NNs in power electronics and drives. Thus, a neural learning algorithm is used for the adaptation mechanism in MRAS, and the result is often referred to as a model reference neural learning adaptive system (MRNLAS). In an MRNLAS, the error between the reference and neural learning adaptive models is back-propagated to adjust the weights of the neural network for rotor speed estimation. The two different methods of MRNLAS are rotor flux based (RF-MRNLAS) and reactive power based (RP-MRNLAS). Reactive power based methods are simple and free from integral equations, in contrast to flux based methods. The advantages of the reactive power based method and of NN learning algorithms are exploited in this work to yield the RP-MRNLAS. The performance of the proposed RP-MRNLAS is analyzed extensively. The proposed RP-MRNLAS is compared, in terms of accuracy and integrator drift problems, with the popular rotor flux based MRNLAS for the same system, and is validated through MATLAB/Simulink. The superiority of the RP-MRNLAS technique is demonstrated.
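To make the adaptation mechanism concrete, the following Python sketch mimics the MRNLAS loop described above: the estimated rotor speed plays the role of the adjustable network weight, and the error between the reference and adaptive model outputs is back-propagated as a gradient step to update it. The q_adaptive function, the constants K and ETA, and the synthetic current signals are illustrative placeholders, not the induction motor d-q equations from the paper.

import math

K = 0.05      # assumed model constant (hypothetical)
ETA = 200.0   # learning rate, tuned for this toy model

def q_adaptive(speed, i_ds, i_qs):
    # Toy stand-in for the speed-dependent adaptive model; a real
    # implementation would use the motor's d-q reactive power equations.
    return K * speed * (i_ds ** 2 + i_qs ** 2)

true_speed = 100.0   # rotor speed the reference model implicitly reflects
speed_hat = 0.0      # the "network weight" being learned

for t in range(50):
    theta = 0.1 * t
    i_ds, i_qs = math.cos(theta), math.sin(theta)   # synthetic stator currents
    q_ref = q_adaptive(true_speed, i_ds, i_qs)      # stands in for the reference model
    q_hat = q_adaptive(speed_hat, i_ds, i_qs)       # adaptive model output
    error = q_ref - q_hat
    # Back-propagate: gradient step on 0.5 * error**2 with respect to speed_hat.
    speed_hat += ETA * error * K * (i_ds ** 2 + i_qs ** 2)

print(f"estimated rotor speed: {speed_hat:.2f}")    # approaches 100.0

Because the same placeholder model generates both signals here, the estimate converges exactly; in the paper the reference model is built from measured stator quantities and is independent of the speed estimate.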


2005 ◽ 
pp. 197-235 ◽ 
Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

In this chapter we will look at supervised learning in more detail, beginning with one of the simplest (and earliest) supervised neural learning algorithms – the Delta Rule. The objectives of this chapter are to provide a solid grounding in the theory and practice of problem solving with artificial neural networks – and an appreciation of some of the challenges and practicalities involved in their use.
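A minimal sketch of the Delta Rule the chapter opens with: a single linear unit trained by per-sample gradient descent on the squared error, w ← w + η(t − y)x. The toy dataset, learning rate, and epoch count below are illustrative choices, not taken from the chapter.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 2))
targets = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 1.0   # a known linear map to recover

Xb = np.hstack([X, np.ones((100, 1))])          # constant input acts as a bias
w = np.zeros(3)
eta = 0.1                                       # learning rate

for epoch in range(50):
    for x, t in zip(Xb, targets):
        y = w @ x                               # linear unit's output
        w += eta * (t - y) * x                  # Delta Rule update

print(w)   # approaches [2.0, -0.5, 1.0]

Since the targets are a noiseless linear function of the inputs, the learned weights recover the generating coefficients; with noisy targets the rule converges to the least-squares solution instead.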


2000 ◽  
Vol 10 (03) ◽  
pp. 227-241 ◽  
Author(s):  
OMER F. RANA

Neural learning algorithms generally involve a number of identical processing units, which are fully or partially connected, and an update function, such as a ramp, a sigmoid, or a Gaussian function. Some variations also exist, where units can be heterogeneous, or where an alternative update technique is employed, such as a pulse stream generator. Associated with connections are numerical values that must be adjusted using a learning rule, governed by parameters that are learning-rule specific, such as momentum, a learning rate, or a temperature, amongst others. Usually, neural learning algorithms involve local updates, and global interaction between units is often discouraged, except in instances where units are fully connected or involve synchronous updates. In all of these instances, concurrency within a neural algorithm cannot be fully exploited without a suitable implementation strategy. A design scheme is described for translating a neural learning algorithm from inception to implementation on a parallel machine using PVM or MPI libraries, or onto programmable logic such as FPGAs. A designer must first describe the algorithm using a specialised Neural Language, from which a Petri net (PN) model is constructed automatically for verification and for building a performance model. The PN model can be used to study issues such as synchronisation points, resource sharing and concurrency within a learning rule. Specialised constructs are provided to enable a designer to express various aspects of a learning rule, such as the number and connectivity of neural nodes, the interconnection strategies, and the information flows required by the learning algorithm. A scheduling and mapping strategy is then used to translate this PN model onto a multiprocessor template. We demonstrate our technique using Kohonen and backpropagation learning rules, implemented on a loosely coupled workstation cluster and on a dedicated parallel machine with PVM libraries.
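As a concrete illustration of the ingredients enumerated above, the Python sketch below pairs a sigmoid update function for a single unit with a local weight update governed by learning-rule-specific parameters (a learning rate and momentum). The function names and parameter values are illustrative assumptions, not constructs from the paper's Neural Language.

import numpy as np

def sigmoid(a):
    # One choice of unit update function; a ramp or a Gaussian could be
    # substituted here.
    return 1.0 / (1.0 + np.exp(-a))

def unit_output(w, x):
    # A single processing unit: weighted sum of its inputs through a sigmoid.
    return sigmoid(w @ x)

def local_update(w, x, target, velocity, eta=0.1, momentum=0.9):
    # Local learning rule: a gradient step on the squared error, scaled by
    # the learning rate, plus a momentum term carrying over the previous
    # weight change. Only locally available quantities are used.
    y = unit_output(w, x)
    grad = (target - y) * y * (1.0 - y) * x   # gradient through the sigmoid
    velocity = momentum * velocity + eta * grad
    return w + velocity, velocity

w = np.zeros(3)
v = np.zeros(3)
x = np.array([0.5, -0.2, 1.0])   # inputs; the trailing 1.0 acts as a bias

for _ in range(100):
    w, v = local_update(w, x, target=0.9, velocity=v)

print(unit_output(w, x))   # approaches the target 0.9

Because the update touches only one unit's weights and inputs, many such updates can proceed concurrently, which is precisely the locality the mapping strategy above exploits.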


1989 ◽  
Vol 1 (3) ◽  
pp. 231-253 ◽  
Author(s):  
JUDE W. SHAVLIK ◽  
GEOFFREY G. TOWELL
