On Training Of Feed Forward Neural Networks

2007 ◽  
Vol 4 (1) ◽  
pp. 158-164
Author(s):  
Baghdad Science Journal

In this paper we describe several different training algorithms for feed-forward neural networks (FFNN). All of these algorithms use the gradient of the performance function (an energy function) to determine how to adjust the weights so that the performance function is minimized, and the back-propagation algorithm is used to compute this gradient and thereby increase the speed of training. The algorithms differ in their computational cost, in the form of their search direction, and in their storage requirements; however, none of them has global properties suited to all problems.
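As an illustration of the common core of these algorithms, the following is a minimal sketch (not code from the paper) of steepest-descent training of a one-hidden-layer FFNN: back-propagation supplies the gradient of a sum-of-squares energy function, and the weights are moved against it. The layer sizes, the learning rate `eta`, and the XOR-style toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy training set (assumed for illustration): XOR inputs and targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# one hidden layer of 4 sigmoid units, one sigmoid output unit
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
eta = 0.5                       # step size along the negative gradient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

E0 = 0.5 * np.sum((forward(X) - T) ** 2)   # energy before training

for _ in range(5000):
    # forward pass
    H = sigmoid(X @ W1 + b1)            # hidden activations
    Y = sigmoid(H @ W2 + b2)            # network output
    # backward pass: gradient of E = 0.5 * sum((Y - T)^2)
    dY = (Y - T) * Y * (1 - Y)          # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)      # delta back-propagated to hidden layer
    # steepest-descent update: w <- w - eta * dE/dw
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(axis=0)

E1 = 0.5 * np.sum((forward(X) - T) ** 2)   # energy after training
```

The other algorithms the paper surveys keep this same gradient but change the search direction and the per-step storage (e.g. conjugate directions or curvature estimates).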

Author(s):  
S. A. Adewusi ◽  
B. O. Al-Bedoor

This paper presents the application of neural networks to rotor crack detection. The basic working principles of neural networks are presented. Experimental vibration signals of rotors with and without a propagating crack were used to train multi-layer feed-forward neural networks using the back-propagation algorithm. The trained neural networks were then tested with another set of vibration data. A simple two-layer feed-forward neural network with two neurons in the input layer and one neuron in the output layer, trained with the signals of a cracked rotor and of a normal rotor without a crack, was found to be satisfactory in detecting a propagating crack. Trained three-layer networks were able to detect both propagating and non-propagating cracks. The FFTs of the vibration signals, showing variation in the amplitudes of the harmonics as time progresses, are also presented for comparison.
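The paper's simplest detector, a two-input/one-output feed-forward net, can be sketched as below. The synthetic "vibration features" (here, assumed 1x- and 2x-harmonic amplitudes) and their labels are invented stand-ins for the experimental signals; only the network shape follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed feature: cracked rotors show a larger 2x-harmonic amplitude
normal  = rng.normal([1.0, 0.2], 0.05, (50, 2))   # label 0: no crack
cracked = rng.normal([1.0, 0.8], 0.05, (50, 2))   # label 1: propagating crack
X = np.vstack([normal, cracked])
t = np.array([0.0] * 50 + [1.0] * 50)

# two input neurons feeding one sigmoid output neuron
w = np.zeros(2); b = 0.0
for _ in range(2000):
    y = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # output-neuron activation
    grad_w = X.T @ (y - t) / len(t)          # back-propagated error gradient
    grad_b = np.mean(y - t)
    w -= 0.5 * grad_w; b -= 0.5 * grad_b     # gradient-descent weight update

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
acc = np.mean(pred == t)                     # fraction correctly classified
```

With two well-separated feature clusters such a network learns a linear decision boundary; the three-layer networks in the paper add hidden units for the harder propagating/non-propagating distinction.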


1991 ◽  
Vol 20 (369) ◽  
Author(s):  
Svend Jules Fjerdingstad ◽  
Carsten Nørskov Greve

<p>This thesis is about parallelizing the training phase of a feed-forward artificial neural network. More specifically, we develop and analyze a number of parallelizations of the widely used neural net learning algorithm called <em>back-propagation</em>.</p><p>We describe two different strategies for parallelizing the back-propagation algorithm. A number of parallelizations employing these strategies have been implemented on a system of 48 transputers, permitting us to evaluate and analyze their performance based on the results of actual runs.</p>
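One of the standard strategies for parallelizing back-propagation is data parallelism: each processor back-propagates over its own shard of the training set, and the shard gradients are summed before a single global weight update. The sketch below (not the thesis code; a linear layer with squared error stands in for a full network) checks that this decomposition reproduces the serial batch gradient exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(48, 3)); T = rng.normal(size=(48, 1))
W = rng.normal(size=(3, 1))

def shard_gradient(Xs, Ts, W):
    # local forward + backward pass: gradient of 0.5*||Xs W - Ts||^2
    return Xs.T @ (Xs @ W - Ts)

# simulate 4 "processors", each holding a shard of the training set
n_workers = 4
shards = zip(np.array_split(X, n_workers), np.array_split(T, n_workers))
parallel_grad = sum(shard_gradient(Xs, Ts, W) for Xs, Ts in shards)

# serial batch gradient over the whole training set
serial_grad = shard_gradient(X, T, W)
```

Because the batch gradient is a sum over examples, the data-parallel update is mathematically identical to serial batch descent; on real hardware such as the transputer system, the communication cost of combining shard gradients is what limits the speedup.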


2013 ◽  
Vol 14 (6) ◽  
pp. 431-439 ◽  
Author(s):  
Issam Hanafi ◽  
Francisco Mata Cabrera ◽  
Abdellatif Khamlichi ◽  
Ignacio Garrido ◽  
José Tejero Manzanares

Author(s):  
Şahin Yildirim ◽  
Sertaç Savaş

The goal of this chapter is to enable a nonholonomic mobile robot to track a specified trajectory with minimum tracking error. Towards that end, an adaptive P controller is designed whose gain parameters are tuned by two feed-forward neural networks. The back-propagation algorithm is chosen for the online learning process, and posture-tracking errors are used as the error values for adjusting the weights of the neural networks. The tracking performance of the controller is illustrated for different trajectories in computer simulations using Matlab/Simulink. In addition, the open-loop response of an experimental mobile robot is investigated for these trajectories. Finally, the performance of the proposed controller is compared to that of a standard PID controller. The simulation results show that the adaptive P controller using neural networks has superior tracking performance and adapts to large disturbances acting on the mobile robot.


Perception ◽  
1989 ◽  
Vol 18 (6) ◽  
pp. 793-803 ◽  
Author(s):  
Ian R Moorhead ◽  
Nigel D Haig ◽  
Richard A Clement

The application of theoretical neural networks to preprocessed images was investigated with the aim of developing a computational recognition system. The neural networks were trained by means of a back-propagation algorithm, to respond selectively to computer-generated bars and edges. The receptive fields of the trained networks were then mapped, in terms of both their synaptic weights and their responses to spot stimuli. There was a direct relationship between the pattern of weights on the inputs to the hidden units (the units in the intermediate layer between the input and the output units), and their receptive field as mapped by spot stimuli. This relationship was not sustained at the level of the output units in that their spot-mapped responses failed to correspond either with the weights of the connections from the hidden units to the output units, or with a qualitative analysis of the networks. Part of this discrepancy may be ascribed to the output function used in the back-propagation algorithm.
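The hidden-unit result, that the spot-mapped receptive field mirrors the input weights, can be illustrated directly. The sketch below (not the study's trained networks) presents single-pixel "spot stimuli" to one sigmoid unit with assumed random weights; because the activation is monotone, the spot-response profile orders the pixels exactly as the synaptic weights do.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=9)        # synaptic weights of one hidden unit (3x3 patch)

def response(stimulus):
    # sigmoid unit: monotone function of the weighted input sum
    return 1.0 / (1.0 + np.exp(-(w @ stimulus)))

# spot-mapping: activate one input pixel at a time, record the response
spots = np.eye(9)
spot_map = np.array([response(s) for s in spots])

# the response profile and the weight pattern rank the pixels identically
agree = np.array_equal(np.argsort(spot_map), np.argsort(w))
```

At the output layer this correspondence breaks down, as the abstract notes, because an output unit's spot response is filtered through the hidden layer's nonlinearities rather than reflecting its incoming weights alone.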

