ANALYSIS OF TRAINING SET PARALLELISM FOR BACKPROPAGATION NEURAL NETWORKS

1995 ◽  
Vol 06 (01) ◽  
pp. 61-78 ◽  
Author(s):  
FOO SHOU KING ◽  
P. SARATCHANDRAN ◽  
N. SUNDARARAJAN

Training set parallelism and network-based parallelism are two popular paradigms for parallelizing a feedforward (artificial) neural network. Training set parallelism is particularly suited to feedforward neural networks with backpropagation learning where the size of the training set is large in relation to the size of the network. This paper analyzes training set parallelism for feedforward neural networks implemented on a transputer array configured in a pipelined ring topology. Theoretical expressions for the time per epoch (iteration) and the optimal size of a processor network are derived when the training set is equally distributed among the processing nodes. These show that the speedup is a function of the number of patterns per processor, the communication overhead per epoch, and the total number of processors in the topology. The paper also analyzes how to optimally distribute the training set over a given processor network when the number of patterns in the training set is not an integer multiple of the number of processors. It is shown that optimal allocation of patterns in such cases is a mixed integer programming problem. Using this analysis, it is found that equal distribution of training patterns among the processors is not the optimal allocation even when the number of patterns is an integer multiple of the number of processors. The analysis is also extended to processor networks comprising processors of different speeds. Experimental results from a T805 transputer array are presented to verify all the theoretical results.
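The trade-off the abstract describes, per-processor compute shrinking as processors are added while ring communication overhead grows, can be illustrated with a small numerical sketch. The cost model below, with an assumed unit compute time per pattern and a per-processor communication term, is an illustration only and is not the paper's derived expressions.

```python
# Illustrative sketch of an epoch-time model for training set parallelism.
# The cost model is assumed for illustration; the paper derives its own
# expressions for a pipelined ring of transputers.

def epoch_time(n_patterns, p, t_pattern=1.0, t_comm=0.5):
    """Time per epoch with n_patterns split evenly over p processors.

    t_pattern: assumed compute time per training pattern
    t_comm:    assumed per-processor communication overhead on the ring
    """
    compute = (n_patterns / p) * t_pattern   # patterns per processor
    communicate = p * t_comm                 # overhead grows with ring size
    return compute + communicate

def speedup(n_patterns, p, **kw):
    return epoch_time(n_patterns, 1, **kw) / epoch_time(n_patterns, p, **kw)

if __name__ == "__main__":
    n = 480  # training patterns (arbitrary example size)
    best = max(range(1, 65), key=lambda p: speedup(n, p))
    for p in (1, 4, 8, 16, best):
        print(f"p={p:3d}  T={epoch_time(n, p):8.2f}  S={speedup(n, p):6.2f}")
```

Under this toy model the epoch time is T(p) = (N/p)·t_pattern + p·t_comm, so an interior optimum exists near p* = sqrt(N·t_pattern/t_comm); this reproduces only the qualitative behavior the abstract attributes to the ring topology, namely that speedup depends on patterns per processor, communication overhead, and processor count.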

MAUSAM ◽  
2002 ◽  
Vol 53 (2) ◽  
pp. 225-232 ◽
Author(s):  
PANKAJ JAIN ◽  
ASHOK KUMAR ◽  
PARVINDER MAINI ◽  
S. V. SINGH

Feedforward neural networks are used for daily precipitation forecasting at several test stations all over India. Six years of European Centre for Medium-Range Weather Forecasts (ECMWF) data are used, with the training set consisting of the four-year data from 1985-1988 and the validation set consisting of the data from 1989-1990. Neural networks are used to develop a concurrent relationship between precipitation and other atmospheric variables. No attempt is made to select optimal variables for this study; the inputs are chosen to be the same as those obtained earlier at the National Centre for Medium Range Weather Forecasting (NCMRWF) in developing a linear regression model. The neural networks are found to yield results that are at least as good as those of linear regression and in several cases give a 10-20% improvement. This is encouraging since the variable selection has so far been optimized for linear regression.
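The comparison the abstract reports can be sketched on synthetic data. The station observations, ECMWF fields, and NCMRWF predictor set are not available here, so the six inputs below are stand-ins, and scikit-learn models substitute for the paper's network.

```python
# Minimal sketch: feedforward network vs. linear regression on a
# synthetic concurrent-relationship task. Predictors and target are
# stand-ins for the ECMWF fields and station precipitation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 6))             # six assumed atmospheric predictors
y = (X @ [0.8, -0.5, 0.3, 0.0, 0.2, 0.1]   # linear part a regression can fit
     + 0.4 * np.tanh(X[:, 0] * X[:, 1])    # mild nonlinearity an MLP can exploit
     + 0.1 * rng.normal(size=1500))

# Chronological split, mimicking train on 1985-88 / validate on 1989-90
X_tr, X_va, y_tr, y_va = X[:1000], X[1000:], y[:1000], y[1000:]

lin = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("linear MSE:", mean_squared_error(y_va, lin.predict(X_va)))
print("MLP MSE:   ", mean_squared_error(y_va, mlp.predict(X_va)))
```

Because both models see identical inputs, any gap in validation error comes from the nonlinearity alone, which mirrors the paper's setup of reusing the regression model's predictors unchanged.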


1990 ◽  
Vol 2 (2) ◽  
pp. 198-209 ◽  
Author(s):  
Marcus Frean

A general method for building and training multilayer perceptrons composed of linear threshold units is proposed. A simple recursive rule is used to build the structure of the network by adding units as they are needed, while a modified perceptron algorithm is used to learn the connection strengths. Convergence to zero errors is guaranteed for any Boolean classification on patterns of binary variables. Simulations suggest that this method is efficient in terms of the number of units constructed, and that the networks it builds can generalize over patterns not in the training set.
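The weight-learning half of such a method can be illustrated with the classic perceptron rule on binary patterns; the paper's modified rule and its recursive unit-adding procedure are not reproduced here, so this sketch shows only how a single linear threshold unit is trained on a separable dichotomy.

```python
# Sketch of perceptron learning for one linear threshold unit on binary
# patterns. The constructive method in the paper adds such units
# recursively and uses a modified rule; this shows only the plain rule.
import numpy as np

def train_threshold_unit(X, y, epochs=100, lr=1.0):
    """X: (n, d) array of 0/1 patterns; y: (n,) array of 0/1 targets."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, y):
            out = 1 if w @ x + b > 0 else 0
            if out != t:                 # update weights only on mistakes
                w += lr * (t - out) * x
                b += lr * (t - out)
                errors += 1
        if errors == 0:                  # zero errors on this dichotomy
            break
    return w, b

# Linearly separable example: logical AND on two binary inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_threshold_unit(X, y)
print([(1 if w @ x + b > 0 else 0) for x in X])  # -> [0, 0, 0, 1]
```

The plain rule converges only on linearly separable dichotomies; the constructive step described in the abstract is what extends this to arbitrary Boolean classifications, by recruiting new units to correct the patterns the current unit gets wrong.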


2020 ◽  
Vol 53 (2) ◽  
pp. 1108-1113
Author(s):  
Magnus Malmström ◽  
Isaac Skog ◽  
Daniel Axehill ◽  
Fredrik Gustafsson
