Parallelizing Feed-Forward Artificial Neural Networks on Transputers

1991 ◽  
Vol 20 (369) ◽  
Author(s):  
Svend Jules Fjerdingstad ◽  
Carsten Nørskov Greve

This thesis is about parallelizing the training phase of a feed-forward artificial neural network. More specifically, we develop and analyze a number of parallelizations of the widely used neural net learning algorithm called back-propagation. We describe two different strategies for parallelizing the back-propagation algorithm. A number of parallelizations employing these strategies have been implemented on a system of 48 transputers, permitting us to evaluate and analyze their performances based on the results of actual runs.
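The abstract does not reproduce the parallelizations themselves, but the general idea behind one classic strategy, splitting the training patterns across processors and averaging the resulting weight gradients, can be sketched as follows. This is a minimal single-machine illustration in Python/NumPy with an invented two-layer network; none of the names, shapes, or hyperparameters come from the thesis.

```python
import numpy as np

# Sketch of pattern-parallel back-propagation: the training set is split into
# chunks (conceptually one per processor), each chunk produces its own weight
# gradient, and the gradients are averaged before the update.
# All data and network sizes below are illustrative placeholders.

rng = np.random.default_rng(0)
X = rng.normal(size=(480, 4))                          # 480 training patterns, 4 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary target

W1 = rng.normal(scale=0.5, size=(4, 8))                # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))                # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradients(Xc, yc, W1, W2):
    """One forward/backward pass over a chunk; returns the chunk's weight gradients."""
    h = sigmoid(Xc @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - yc) * out * (1 - out)               # delta for squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    return Xc.T @ d_h / len(Xc), h.T @ d_out / len(Xc)

n_workers, lr = 4, 0.5
for epoch in range(200):
    chunks = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [gradients(Xc, yc, W1, W2) for Xc, yc in chunks]   # conceptually parallel
    W1 -= lr * np.mean([g[0] for g in grads], axis=0)
    W2 -= lr * np.mean([g[1] for g in grads], axis=0)
```

On an actual transputer network, each chunk's forward/backward pass would run on its own processor and the averaging step would be a message-passing reduction; the sketch only shows the arithmetic structure of that scheme.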

Author(s):  
Eldon R. Rene ◽  
M. Estefanía López ◽  
María C. Veiga ◽  
Christian Kennes

Due to their inherent robustness, artificial neural network models have proven successful and have been used extensively in biological wastewater treatment applications. However, only recently, with the scientific advancements made in biological waste gas treatment systems, has the application of neural networks slowly gained practical momentum for performance monitoring in this field. Simple neural models, after rigorous training and testing, are able to generalize the results of a wide range of operating conditions with high prediction accuracy. This chapter gives a fundamental insight into and overview of the process mechanisms of different biological waste gas treatment systems (biofilters, biotrickling filters, continuous stirred tank bioreactors and monolith bioreactors) and wastewater treatment systems (activated sludge process, trickling filter and sequencing batch reactors). The basic theory of artificial neural networks is explained, with particular attention to the back-propagation algorithm. A generalized neural network modelling procedure for waste treatment applications is outlined, and the role of the back-propagation network parameters is discussed. Finally, applications of neural networks for solving specific environmental problems are presented in the form of a literature review.


Author(s):  
Pooja Yadav ◽  
Atish Sagar

Rainfall prediction is clearly of great importance for any country. One would like to make long-term predictions, i.e. predict the total monsoon rainfall a few weeks or months in advance, and short-term predictions, i.e. predict the rainfall over different locations a few days in advance [1]. Rainfall is predicted using its correlation with observed parameters. Several regression and neural network based models are currently available. While artificial neural networks provide a great deal of promise, they also embody much uncertainty [2,3]. In this paper, different artificial neural network models have been created for rainfall prediction in the Uttarakhand region of India. These ANN models were trained using the feed-forward back-propagation algorithm [4,5]. The number of neurons for all the models was kept at 10. The mean squared error was measured for each model, and the best accuracy was obtained with the feed-forward back-propagation algorithm, with an MSE value as low as 0.00547823.
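As a rough illustration of the kind of model described above, a feed-forward network with a single hidden layer of 10 neurons, trained by back-propagation and scored by MSE, a minimal sketch using scikit-learn might look as follows. The data, the number of predictors, and all hyperparameters are placeholders, not the paper's Uttarakhand data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: a few meteorological predictors vs. observed rainfall.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
scaler = StandardScaler().fit(X_train)

# Single hidden layer of 10 neurons, trained by gradient-based back-propagation (SGD).
model = MLPRegressor(hidden_layer_sizes=(10,), activation='logistic',
                     solver='sgd', learning_rate_init=0.01,
                     max_iter=2000, random_state=1)
model.fit(scaler.transform(X_train), y_train)

mse = mean_squared_error(y_test, model.predict(scaler.transform(X_test)))
print(f"test MSE: {mse:.6f}")
```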


2013 ◽  
Vol 14 (6) ◽  
pp. 431-439 ◽  
Author(s):  
Issam Hanafi ◽  
Francisco Mata Cabrera ◽  
Abdellatif Khamlichi ◽  
Ignacio Garrido ◽  
José Tejero Manzanares

2017 ◽  
Vol 43 (4) ◽  
pp. 26-32 ◽  
Author(s):  
Sinan Mehmet Turp

This study investigates the estimation of the adsorption efficiency of Nickel(II) ions with perlite in an aqueous solution using artificial neural networks, based on 140 experimental data sets. Prediction with the artificial neural network is performed for Nickel(II) initial concentrations ranging from 0.1 mg/L to 10 mg/L, adsorbent dosages ranging from 0.1 mg to 2 mg, and contact times ranging from 5 to 30 min. This study presents an artificial neural network that predicts the adsorption efficiency of Nickel(II) ions with perlite. The best training algorithm is determined to be a quasi-Newton back-propagation algorithm. The performance of the artificial neural network is assessed by the coefficient of determination (R²), and its architecture is 3-12-1. The predictions show excellent agreement between the experimental data and the predicted values.
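A hedged sketch of the setup the abstract describes, a 3-12-1 network trained with a quasi-Newton optimiser and evaluated by R², could look as follows in scikit-learn, where the 'lbfgs' solver is a quasi-Newton method standing in for the paper's quasi-Newton back-propagation variant. The synthetic data merely mimics the stated input ranges; it is not the paper's 140-point experimental data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the 140 experimental points: three inputs
# (initial Ni(II) concentration, adsorbent dosage, contact time) and one
# output (adsorption efficiency). Values are synthetic, not measured data.
rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(0.1, 10, 140),   # mg/L
                     rng.uniform(0.1, 2, 140),    # mg
                     rng.uniform(5, 30, 140)])    # min
y = 100 * (1 - np.exp(-0.05 * X[:, 1] * X[:, 2] / X[:, 0]))  # toy efficiency (%)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# 3-12-1 architecture: 3 inputs, 12 hidden neurons, 1 output.
model = MLPRegressor(hidden_layer_sizes=(12,), solver='lbfgs',
                     max_iter=5000, random_state=2).fit(X_train, y_train)
print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.3f}")
```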


SAINTEKBU ◽  
2016 ◽  
Vol 1 (1) ◽  
Author(s):  
Wiratmoko Yuwono ◽  
Yodik Iwan Herlambang ◽  
Mauridhi Hery Purnomo ◽  
Prima Kristalina

Artificial neural network (ANN) software has been implemented for predicting many things and has replaced conventional prediction methods based on linear regression. The back-propagation algorithm can be used to build a program that predicts the telephone exchange health grade from data recorded previously. By predicting each parameter that is correlated with the telephone exchange health grade, we can predict the telephone exchange health grade in the next period.

Keywords: artificial neural network, back-propagation, telephone exchange health grade.


Author(s):  
Maria Sivak ◽  
Vladimir Timofeev

The paper considers the problem of building robust neural networks using different robust loss functions. Applying such neural networks is reasonable when working with noisy data, and it can serve as an alternative to data preprocessing and to making the neural network architecture more complex. In order to work adequately, the error back-propagation algorithm requires the loss function to be continuously (or twice) differentiable. According to this requirement, five robust loss functions were chosen (Andrews, Welsch, Huber, Ramsey and Fair). Using the above-mentioned functions in the error back-propagation algorithm instead of the quadratic one yields an entirely new class of neural networks. To investigate the properties of the resulting networks, a number of computational experiments were carried out. Different outlier fractions and numbers of epochs were considered. The first stage involved tuning the obtained neural networks, i.e. choosing the values of the internal loss function parameters that resulted in the highest network accuracy. To determine the ranges of these parameter values, a preliminary study was conducted. The results of the first stage allowed recommendations to be given on the best parameter values for each of the loss functions under study. The second stage dealt with comparing the investigated robust networks with each other and with the classical one. The analysis of the results shows that using the robust technique leads to a significant increase in neural network accuracy and learning speed.
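To make the idea concrete, the sketch below swaps the quadratic loss for a robust one inside an otherwise ordinary back-propagation loop: only the derivative of the loss with respect to the residual (the influence function ψ) changes. The Huber and Welsch ψ-functions shown are the standard M-estimator forms; the tuning constants, network, and data are illustrative and not taken from the paper.

```python
import numpy as np

# Swapping the quadratic loss for a robust one in back-propagation:
# only psi (the derivative of the loss w.r.t. the residual) changes.

def psi_quadratic(r):
    return r

def psi_huber(r, c=1.345):
    return np.clip(r, -c, c)            # identity inside [-c, c], constant outside

def psi_welsch(r, c=2.985):
    return r * np.exp(-(r / c) ** 2)    # redescending: large residuals fade out

def train(X, y, psi, hidden=8, lr=0.1, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)
        out = h @ W2                      # linear output unit
        d_out = psi(out - y) / len(X)     # robust influence replaces the plain residual
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return W1, W2

# Toy data with a few gross outliers, to show where a robust loss helps.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([[1.0], [-2.0], [0.5]]) + rng.normal(scale=0.1, size=(200, 1))
y[:10] += 25.0                            # 5% contaminated targets
for name, psi in [("quadratic", psi_quadratic), ("Huber", psi_huber), ("Welsch", psi_welsch)]:
    W1, W2 = train(X, y, psi)
    clean_err = np.mean((np.tanh(X[10:] @ W1) @ W2 - y[10:]) ** 2)
    print(f"{name:10s} MSE on clean points: {clean_err:.4f}")
```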

