Parallel Hardware for Artificial Neural Networks Using Fixed Floating Point Representation

Author(s):  
Nadia Nedjah ◽  
Rodrigo Martins da Silva ◽  
Luiza de Macedo Mourelle

Artificial Neural Networks (ANNs) are a well-known bio-inspired model that simulates human brain capabilities such as learning and generalization. An ANN consists of a number of interconnected processing units, each of which performs a weighted sum followed by the evaluation of a given activation function. The involved computation has a tremendous impact on implementation efficiency. Existing hardware implementations of ANNs attempt to speed up the computational process, but they require a huge silicon area that makes it almost impossible to fit them within the resources available on state-of-the-art FPGAs. In this chapter, a hardware architecture for ANNs is devised that takes advantage of the dedicated multiply-accumulate blocks, commonly called MACs, to compute both the weighted sum and the activation function. The proposed architecture requires a reduced silicon area, given that the MACs come for free as FPGA built-in cores. Our system uses integer (fixed-point) mathematics and operates with fractions to represent real numbers. Hence, floating-point representation is not employed, and all mathematical computation of the ANN hardware is based on combinational circuitry (performing only sums and multiplications). The hardware is fast because it is massively parallel. Moreover, the proposed architecture can adjust itself on the fly to the user-defined configuration of the neural network, i.e., the number of layers and the number of neurons per layer of the ANN can be set with no extra hardware changes. This is a very useful characteristic in robot-like systems, considering the possibility that the same hardware may be exploited for different tasks. The hardware also requires a companion system (software) that controls the sequence of the hardware computation and provides the inputs, weights, and biases for the ANN in hardware. Thus, a co-design environment is necessary.
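The fixed-point scheme the chapter describes can be sketched in a few lines: real values are stored as integers scaled by a power of two, so each neuron's weighted sum needs only the integer multiplies and adds that an FPGA MAC block provides. The 16-bit fraction width below is an illustrative assumption, not the chapter's actual word length:

```python
# Sketch of fixed-point (Qm.n) arithmetic for a neuron's weighted sum:
# reals are held as integers scaled by 2**FRAC, so the whole computation
# reduces to integer multiply-accumulate operations.
FRAC = 16                      # assumed number of fractional bits

def to_fixed(x: float) -> int:
    return int(round(x * (1 << FRAC)))

def from_fixed(x: int) -> float:
    return x / (1 << FRAC)

def neuron_fixed(inputs, weights, bias):
    """Weighted sum in pure integer math; each product carries 2*FRAC
    fractional bits, so the accumulator is kept at that precision and
    shifted back down once at the end."""
    acc = to_fixed(bias) << FRAC          # align bias to 2*FRAC bits
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)  # integer multiply-accumulate
    return from_fixed(acc >> FRAC)

# 0.5*0.25 + 0.5*(-0.75) + 0.1  ≈  -0.15 (up to quantization error)
print(neuron_fixed([0.5, 0.5], [0.25, -0.75], 0.1))
```

In real hardware the shift and accumulation widths are fixed by the MAC primitive; the software model above only mirrors the arithmetic, not the circuit.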

Agriculture ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 567
Author(s):  
Jolanta Wawrzyniak

Artificial neural networks (ANNs) constitute a promising modeling approach that may be used in control systems for postharvest preservation and storage processes. The study investigated the ability of multilayer perceptron and radial-basis-function ANNs to predict fungal population levels in bulk-stored rapeseeds at various temperatures (T = 12–30 °C) and water activities in seeds (aw = 0.75–0.90). The neural network model input included aw, temperature, and time, whilst the fungal population level was the model output. During model construction, networks with different numbers of hidden-layer neurons and different configurations of activation functions in the hidden- and output-layer neurons were examined. The best architecture was a multilayer perceptron ANN in which the hyperbolic tangent acted as the activation function in the hidden-layer neurons, while the linear function was the activation function in the output-layer neuron. The developed structure exhibits high prediction accuracy and high generalization capability. The model provided in the research may be readily incorporated into control systems for postharvest rapeseed preservation and storage as a support tool which, based on easily measurable online parameters, can estimate the risk of fungal development and thus mycotoxin accumulation.
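The selected architecture (tanh hidden layer, linear output neuron) reduces to a short forward pass. The weights and input values below are made up for illustration; the study's fitted parameters are not reproduced here:

```python
import math

# Minimal forward pass of a multilayer perceptron with hyperbolic-tangent
# activations in the hidden layer and a single linear output neuron,
# matching the architecture described above. All numbers are placeholders.
def mlp_forward(x, W_hidden, b_hidden, w_out, b_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w_row, x)) + b)
              for w_row, b in zip(W_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out  # linear output

# inputs: water activity, temperature (°C), time (arbitrary example values)
y = mlp_forward([0.85, 25.0, 10.0],
                W_hidden=[[1.0, -0.02, 0.05], [-0.5, 0.01, 0.1]],
                b_hidden=[0.1, -0.2],
                w_out=[0.7, 0.3], b_out=0.5)
```

In practice the inputs would be normalized before entering the network, as is standard for tanh hidden units.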


1997 ◽  
Vol 9 (5) ◽  
pp. 1109-1126
Author(s):  
Zhiyu Tian ◽  
Ting-Ting Y. Lin ◽  
Shiyuan Yang ◽  
Shibai Tong

With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important to their diagnosis, repair, reconfiguration, and reliable application. The behavior of feedforward neural networks with a hard-limiting activation function under stuck-at faults is studied in this article. It is shown that stuck-at-M faults have a larger effect on the network's performance than mixed stuck-at faults, which in turn have a larger effect than stuck-at-0 faults. Furthermore, the fault-tolerant ability of the network decreases as its size increases, for the same percentage of faulty interconnections. The results of our analysis are validated by Monte Carlo simulations.
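The fault model can be illustrated with a small Monte Carlo experiment: a single layer of hard-limiting (sign) neurons in which a fraction of the interconnection weights is forced to a stuck value. The network size, fault probability, and fault magnitude below are illustrative assumptions, not the article's experimental settings:

```python
import random

# Monte Carlo sketch of stuck-at faults in a hard-limiting network:
# compare the output disagreement rate when faulty weights are stuck at
# a large value M versus stuck at 0.
def output(x, w):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1  # hard limiter

def error_rate(fault_value, p_fault=0.2, n=32, trials=2000, seed=0):
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        w = [rng.uniform(-1, 1) for _ in range(n)]       # healthy weights
        x = [rng.choice([-1, 1]) for _ in range(n)]      # bipolar inputs
        wf = [fault_value if rng.random() < p_fault else wi for wi in w]
        errors += output(x, w) != output(x, wf)
    return errors / trials

# stuck-at-M (M outside the healthy weight range) vs. stuck-at-0
print(error_rate(5.0), error_rate(0.0))
```

The simulation reproduces the qualitative ordering reported above: stuck-at-M faults disturb the weighted sum far more than stuck-at-0 faults, which merely remove terms from it.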


2010 ◽  
Vol 2010 ◽  
pp. 1-7 ◽  
Author(s):  
Reginald B. Silva ◽  
Piero Iori ◽  
Cecilia Armesto ◽  
Hugo N. Bendini

Soil loss is one of the main causes of pauperization and alteration of agricultural soil properties. Various empirical models (e.g., USLE) are used to predict soil losses from climate variables, which in general have to be derived by spatial interpolation of point measurements. Alternatively, Artificial Neural Networks may be used as a powerful option to obtain site-specific climate data from independent factors. This study aimed to develop an artificial neural network to estimate rainfall erosivity in the Ribeira Valley and Coastal region of the State of São Paulo. In the development of the Artificial Neural Networks, the input variables were latitude, longitude, and annual rainfall, and rainfall erosivity was the output variable. It was found, among other things, that Artificial Neural Networks can be used to interpolate rainfall erosivity values for the Ribeira Valley and Coastal region of the State of São Paulo with a satisfactory degree of precision in the estimation of erosion. The network's performance was demonstrated by comparison with a mathematical equation adjusted to the specific conditions of the study area.
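The idea of fitting a small network to station measurements and then evaluating it at unmeasured coordinates can be sketched with a single sigmoid neuron trained by gradient descent. The data points, normalization, and network size below are synthetic placeholders; the actual study used real station data and a larger network:

```python
import math
import random

# Hedged sketch of an ANN as a spatial interpolator: fit (lat, lon,
# rainfall) -> erosivity on a few synthetic, normalized points, then
# evaluate at an unmeasured location. All values are illustrative.
random.seed(1)
data = [((0.2, 0.3, 0.8), 0.7),
        ((0.5, 0.6, 0.4), 0.5),
        ((0.8, 0.1, 0.9), 0.9)]   # (lat, lon, annual rainfall) -> erosivity

sigmoid = lambda z: 1 / (1 + math.exp(-z))
w = [random.uniform(-0.5, 0.5) for _ in range(3)]
b = 0.0
lr = 0.5

for _ in range(5000):              # plain stochastic gradient descent
    for x, t in data:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = (y - t) * y * (1 - y)  # dE/dz for squared error
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# interpolate erosivity at an unmeasured location
est = sigmoid(sum(wi * xi for wi, xi in zip(w, (0.4, 0.4, 0.6))) + b)
```

The value of the approach is exactly this last step: once fitted, the network gives a site-specific estimate anywhere in the region without a separate interpolation scheme.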


2003 ◽  
Vol 14 (6) ◽  
pp. 1576-1579 ◽  
Author(s):  
E. Soria-Olivas ◽  
J.D. Martin-Guerrero ◽  
G. Camps-Valls ◽  
A.J. Serrano-Lopez ◽  
J. Calpe-Maravilla ◽  
...  

2020 ◽  
Author(s):  
Dian Ade Kurnia

Artificial neural networks use the same analogy and process information using artificial neurons. Information is transferred from one artificial neuron to another, finally reaching an activation function, which acts like a brain and makes a decision.
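That flow, weighted inputs arriving at an activation function that makes the decision, fits in a few lines. The threshold activation and the AND-style weights below are a textbook toy, not taken from this source:

```python
# Toy neuron: information flows in as weighted inputs, and the activation
# function (here a simple threshold) makes the final decision.
def step(z):
    return 1 if z >= 0 else 0   # hard threshold, like a neuron firing

def neuron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

# an AND-like decision: fires only when both inputs are active
print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0
```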


Artificial neural networks of the feed-forward kind are an established technique under the supervised learning paradigm for the solution of learning tasks. The mathematical result that allows one to assert the usefulness of this technique is that these networks can approximate any continuous function to the desired degree of accuracy. The requirement imposed on these networks is that they have non-linear functions of a specific kind at the hidden nodes. In general, sigmoidal non-linearities, called activation functions, are used. In this paper we propose an asymmetric activation function. Networks using the proposed activation function are compared against those using the commonly used logistic and hyperbolic tangent activation functions on 12 function approximation problems. The results obtained allow us to infer that the proposed activation function, in general, reaches deeper minima of the error measures and achieves better generalization error values.
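The baseline activations named above, and what "asymmetric" means in contrast to them, can be shown directly. The asymmetric form below is a hypothetical ELU-like function chosen only to illustrate the property f(-z) ≠ -f(z); the paper's actual proposed function is not reproduced here:

```python
import math

# The logistic and hyperbolic-tangent activations used as baselines, plus
# one assumed asymmetric alternative. tanh is antisymmetric about the
# origin; the asymmetric function deliberately is not.
logistic = lambda z: 1 / (1 + math.exp(-z))
tanh = math.tanh
asym = lambda z: z if z >= 0 else 0.25 * (math.exp(z) - 1)  # assumed form

for z in (-2.0, 0.0, 2.0):
    print(f"{z:+.1f}: logistic={logistic(z):.3f} "
          f"tanh={tanh(z):.3f} asym={asym(z):.3f}")
```

The asymmetry shows up as unequal slopes and saturation levels on either side of the origin, which is the kind of shape difference the comparison on the 12 approximation problems is probing.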

