A BACKPROPAGATION ALGORITHM FOR A NETWORK OF NEURONS WITH THRESHOLD CONTROLLED SYNAPSES

1991 ◽  
Vol 02 (01n02) ◽  
pp. 135-141 ◽  
Author(s):  
A. Hartstein

Neurons with threshold-controlled synapses are easier to implement in VLSI technology than the more commonly studied multiplicative-type synapses. In this paper I derive a backpropagation algorithm which is suitable for networks using this type of neuron. The decision surface obtained from this type of network is composed of elementary hyperoctahedra centered on each point in decision space. Simulations of a simple two-layer feedforward network are used to show that a network with one hidden layer can learn the logical AND, OR, and XOR functions, and in addition solve the eight-bit parity problem and the four-bit problem.
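
The hyperoctahedral decision surfaces suggest a unit whose firing condition depends on the L1 distance between the input and a weight vector. Below is a minimal sketch assuming a unit of the form step(θ − Σᵢ|xᵢ − wᵢ|); this form is an illustrative reading consistent with the stated geometry, not necessarily the paper's exact neuron.

```python
import numpy as np

def l1_threshold_unit(x, w, theta):
    """Fire when x lies inside the hyperoctahedron (L1 ball) of
    radius theta centered at w -- an assumed form consistent with
    the hyperoctahedral decision surfaces described above."""
    return float(np.sum(np.abs(x - w)) <= theta)

# The decision region {x : sum_i |x_i - w_i| <= theta} is an
# elementary hyperoctahedron centered at w.
x = np.array([0.2, -0.1, 0.4])
w = np.array([0.0,  0.0, 0.5])
print(l1_threshold_unit(x, w, theta=0.5))  # 1.0: L1 distance is 0.4
```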

2001 ◽  
Vol 13 (2) ◽  
pp. 319-326 ◽  
Author(s):  
Hon-Kwok Fung ◽  
Leong Kwan Li

This article presents preliminary research on the general problem of reducing the number of neurons needed in a neural network so that the network can perform a specific recognition task. We consider a single-hidden-layer feedforward network in which only McCulloch-Pitts units are employed in the hidden layer. We show that if only interconnections between adjacent layers are allowed, the minimum size of the hidden layer required to solve the n-bit parity problem is n when n ≤ 4.
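
For orientation, here is the classical construction that realizes n-bit parity with n hidden threshold units (a standard textbook construction, offered only as an illustration of the bound, not as the paper's proof): hidden unit k fires when at least k inputs are on, and the output unit combines the hidden activations with alternating ±1 weights.

```python
import itertools
import numpy as np

def parity_network(x):
    """n-bit parity with n McCulloch-Pitts hidden units.

    Hidden unit k (k = 1..n) fires when sum(x) >= k; the output unit
    weights the hidden activations by +1, -1, +1, ... and thresholds
    at 0.5. If s inputs are on, units 1..s fire and the alternating
    sum is 1 for odd s and 0 for even s."""
    n = len(x)
    hidden = np.array([1.0 if sum(x) >= k else 0.0 for k in range(1, n + 1)])
    out_w = np.array([(-1.0) ** (k + 1) for k in range(1, n + 1)])
    return int(hidden @ out_w >= 0.5)

# Verify against true parity for n = 4.
for x in itertools.product([0, 1], repeat=4):
    assert parity_network(x) == sum(x) % 2
print("4-bit parity reproduced with 4 hidden threshold units")
```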


Acta Numerica ◽  
1994 ◽  
Vol 3 ◽  
pp. 145-202 ◽  
Author(s):  
S.W. Ellacott

This article starts with a brief introduction to neural networks for those unfamiliar with the basic concepts, together with a very brief overview of mathematical approaches to the subject. This is followed by a more detailed look at three areas of research which are of particular interest to numerical analysts. The first area is approximation theory. If K is a compact set in ℝⁿ, for some n, then it is proved that a semilinear feedforward network with one hidden layer can uniformly approximate any continuous function in C(K) to any required accuracy. A discussion of known results and open questions on the degree of approximation is included. We also consider the relevance of radial basis functions to neural networks. The second area considered is that of learning algorithms. A detailed analysis of one popular algorithm (the delta rule) will be given, indicating why one implementation leads to a stable numerical process, whereas an initially attractive variant (essentially a form of steepest descent) does not. Similar considerations apply to the backpropagation algorithm. The effect of filtering and other preprocessing of the input data will also be discussed systematically. Finally, some applications of neural networks to numerical computation are considered.
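
A minimal sketch of the two implementations of the delta rule contrasted above, assuming a single linear unit and squared error (data, variable names, and step sizes are ours): the pattern-by-pattern rule updates after every sample, while the steepest-descent variant accumulates the gradient over the whole batch and, because the summed gradient scales with the data set, tolerates only a much smaller step size before becoming unstable.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

def delta_rule(X, y, lr=0.01, epochs=50):
    """Pattern-by-pattern delta rule: update after every sample."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            w += lr * (y_i - x_i @ w) * x_i
    return w

def steepest_descent(X, y, lr=0.001, epochs=50):
    """Batch variant: one update per epoch on the summed gradient.
    The summed gradient grows with the number of samples, so the
    step size must be much smaller to keep the iteration stable."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w += lr * X.T @ (y - X @ w)
    return w

print(delta_rule(X, y))        # close to w_true
print(steepest_descent(X, y))  # same minimizer, different numerics
```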


2020 ◽  
Vol 15 ◽  
pp. 155892501990083
Author(s):  
Xintong Li ◽  
Honglian Cong ◽  
Zhe Gao ◽  
Zhijia Dong

In this article, thermal resistance and water vapor resistance tests were conducted to obtain data on heat and humidity performance. Canonical correlation analysis was used to determine the influence of basic fabric parameters on heat and humidity performance. Thermal resistance and water vapor resistance models were established with a three-layer feedforward neural network. To improve the generalization of the network and to address the difficulty of determining the optimal network structure, trainbr (Bayesian regularization) was chosen as the training algorithm to find the relationship between input factors and output data. After training and verification, the number of hidden layer neurons in the thermal resistance model was 12, and the error reached 10⁻³. In the water vapor resistance model, the number of hidden layer neurons was 10, and the error likewise reached 10⁻³.
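
A rough Python analogue of this modeling setup, using scikit-learn's MLPRegressor with L2 weight decay as a crude stand-in for MATLAB's trainbr Bayesian regularization (the fabric data and parameter names below are placeholders, not the study's data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: rows are fabric samples, columns are basic fabric
# parameters (e.g. thickness, areal density, porosity); the target is
# a measured thermal resistance. All values here are synthetic.
rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=60)

# 12 hidden neurons, as reported for the thermal resistance model;
# alpha adds L2 weight decay, a rough substitute for the Bayesian
# regularization performed by trainbr.
model = MLPRegressor(hidden_layer_sizes=(12,), alpha=1e-2,
                     max_iter=5000, random_state=0).fit(X, y)
print("training MSE:", np.mean((model.predict(X) - y) ** 2))
```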


2019 ◽  
Vol 116 (16) ◽  
pp. 7723-7731 ◽  
Author(s):  
Dmitry Krotov ◽  
John J. Hopfield

It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.
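
A deliberately simplified sketch of a learning rule in this spirit, reducing global inhibition to hard winner-take-all with a weak anti-Hebbian update for the runner-up (this reduction is ours, not the paper's exact rule; an Oja-style decay term keeps the weights bounded):

```python
import numpy as np

def competitive_hebbian(X, n_hidden=16, lr=0.02, delta=0.4,
                        epochs=10, seed=0):
    """Unsupervised feature learning with purely local updates.

    For each input, the most activated hidden unit receives a Hebbian
    update pulling its weights toward the input; the runner-up gets a
    weak anti-Hebbian push, standing in for global inhibition."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            h = W @ x
            order = np.argsort(h)
            win, second = order[-1], order[-2]
            W[win] += lr * (x - h[win] * W[win])                    # Hebbian
            W[second] -= lr * delta * (x - h[second] * W[second])   # anti-Hebbian
    return W

# The rows of W then serve as fixed lower-layer feature detectors;
# a top layer is trained on W @ x in the usual supervised way.
```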


Author(s):  
CHANGHUA YU ◽  
MICHAEL T. MANRY ◽  
JIANG LI

In the neural network literature, many preprocessing techniques, such as feature de-correlation, input unbiasing and normalization, are suggested to accelerate multilayer perceptron training. In this paper, we show that a network trained with an original data set and one trained with a linear transformation of the original data will go through the same training dynamics, as long as they start from equivalent states. Thus preprocessing techniques may not be helpful; they are merely equivalent to using a different weight set to initialize the network. Theoretical analyses of such preprocessing approaches are given for the conjugate gradient, backpropagation and Newton methods. In addition, an efficient Newton-like training algorithm is proposed for hidden layer training. Experiments on various data sets confirm the theoretical analyses and verify the improvement of the new algorithm.
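
A small numerical check of the equivalence claim for the Newton case, using a linear model and synthetic data (the paper's analysis covers multilayer perceptrons): starting from equivalent states, training on X and on the linearly transformed X·A produces identical predictions at every iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = X @ np.array([0.5, -1.0, 2.0, 0.0]) + 0.1 * rng.normal(size=100)

A = rng.normal(size=(4, 4))     # invertible linear "preprocessing"
Xt = X @ A                      # transformed data set

def newton_steps(X, y, w, n_steps=5, damping=0.5):
    """Damped Newton iterations for linear least squares."""
    H = X.T @ X                 # Hessian of the squared error
    for _ in range(n_steps):
        g = X.T @ (X @ w - y)   # gradient
        w = w - damping * np.linalg.solve(H, g)
        yield X @ w             # predictions after this step

w0 = rng.normal(size=4)
w0t = np.linalg.solve(A, w0)    # equivalent starting state: Xt @ w0t == X @ w0

# Predictions agree at every iteration: both runs follow the same
# training dynamics, so the preprocessing by A changes nothing.
for p, pt in zip(newton_steps(X, y, w0), newton_steps(Xt, y, w0t)):
    assert np.allclose(p, pt)
print("identical dynamics under linear preprocessing (Newton)")
```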


2016 ◽  
Vol 28 (7) ◽  
pp. 1289-1304 ◽  
Author(s):  
Namig J. Guliyev ◽  
Vugar E. Ismailov

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this note, we consider constructive approximation on any finite interval of the real axis by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function σ providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of σ at any reasonable point of the real axis.
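
For orientation, the network family in question is simply y = c·σ(ax + b) + d; the substance of the paper is the algorithmic construction of the special σ, which the placeholder sketch below does not attempt to reproduce (it uses an ordinary logistic function instead).

```python
import numpy as np

def sigmoid(t):
    """Ordinary logistic function -- only a placeholder for the
    specially constructed sigma of the paper."""
    return 1.0 / (1.0 + np.exp(-t))

def one_neuron_net(x, a, b, c, d, sigma=sigmoid):
    """The family considered: one hidden neuron, y = c*sigma(a*x+b)+d.
    With the paper's sigma, tuning (a, b, c, d) approximates any
    continuous function on a finite interval to any accuracy."""
    return c * sigma(a * x + b) + d

print(one_neuron_net(np.linspace(-1, 1, 5), a=2.0, b=0.0, c=1.0, d=0.0))
```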


2008 ◽  
Vol 20 (4) ◽  
pp. 1042-1064
Author(s):  
Maciej Pedzisz ◽  
Danilo P. Mandic

A homomorphic feedforward network (HFFN) for nonlinear adaptive filtering is introduced. This is achieved by a two-layer feedforward architecture with an exponential hidden layer and a logarithmic preprocessing step. This way, the overall input-output relationship can be seen as a generalized Volterra model, or as a bank of homomorphic filters. Gradient-based learning for this architecture is introduced, together with some practical issues related to the choice of optimal learning parameters and weight initialization. The performance and convergence speed are verified by analysis and extensive simulations. For rigor, the simulations are conducted on artificial and real-life data, and the performances are compared against those obtained by a sigmoidal feedforward network (FFN) with identical topology. The proposed HFFN proved to be a viable alternative to FFNs, especially in the critical case of online learning on small- and medium-scale data sets.
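
A minimal sketch of the forward pass implied by this description (variable names are ours; gradient learning and the initialization issues are omitted):

```python
import numpy as np

def hffn_forward(x, A, b, w, eps=1e-12):
    """Homomorphic feedforward sketch: logarithmic preprocessing,
    exponential hidden layer, linear output.

    y = sum_k w_k * exp(A_k . log(x) + b_k)
      = sum_k w_k * exp(b_k) * prod_i x_i ** A_ki,
    i.e. a weighted sum of products of input powers -- the
    generalized Volterra view mentioned above."""
    z = np.log(np.abs(x) + eps)     # logarithmic preprocessing
    h = np.exp(A @ z + b)           # exponential hidden layer
    return w @ h                    # linear output layer

x = np.array([0.5, 2.0])
A = np.array([[1.0, 1.0],    # this hidden unit computes x1 * x2
              [2.0, 0.0]])   # this hidden unit computes x1 ** 2
print(hffn_forward(x, A, b=np.zeros(2), w=np.array([1.0, 1.0])))
# ~ 0.5 * 2.0 + 0.5**2 = 1.25
```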


2012 ◽  
Vol 23 (12) ◽  
pp. 1974-1986 ◽  
Author(s):  
Zhihong Man ◽  
Kevin Lee ◽  
Dianhui Wang ◽  
Zhenwei Cao ◽  
Suiyang Khoo

2021 ◽  
Vol 5 (1) ◽  
pp. 90
Author(s):  
Miftahul Falah ◽  
Dian Palupi Rini ◽  
Iwan Pahendra

Disease prediction is usually based on the experience and knowledge of the doctor, and such traditional diagnosis is less effective. Machine-learning-based medical diagnosis provides more accurate disease prediction than the traditional approach. Artificial neural networks can be used for disease prediction; among the many available algorithms is the Backpropagation algorithm. In this paper, a disease prediction system using the Backpropagation algorithm is proposed. Backpropagation is often used in disease prediction, but it has the drawback of tending to take a long time to reach optimum accuracy. A combination of algorithms can overcome this shortcoming: the Gravitational Search Algorithm (GSA) can address the slow convergence and local minimum problems of Backpropagation. The authors therefore propose combining the Backpropagation algorithm with the Gravitational Search Algorithm (GSA), in the hope of achieving better accuracy than Backpropagation alone. The results show a higher level of accuracy, at the same number of iterations, than Backpropagation alone: in the first trial, on breast cancer data with a hidden layer of 5 neurons, a learning rate of 2, and 5000 iterations, the Backpropagation algorithm achieved an accuracy of 99.3% (error 0.7%), while the combined BP & GSA achieved an accuracy of 99.68% (error 0.32%).
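
For reference, a compact sketch of GSA as a weight-vector optimizer (a generic textbook formulation; the paper's exact hybridization scheme and constants may differ). The best agent found can then seed ordinary Backpropagation training, starting it past poor regions of the loss surface.

```python
import numpy as np

def gsa_minimize(loss, dim, n_agents=20, iters=100, g0=100.0,
                 alpha=20.0, seed=0):
    """Gravitational Search Algorithm sketch: candidate weight
    vectors are agents whose fitness determines their mass; agents
    attract one another and drift toward good regions of the loss."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_agents, dim))
    vel = np.zeros_like(pos)
    for t in range(iters):
        fit = np.array([loss(p) for p in pos])
        best, worst = fit.min(), fit.max()
        mass = (worst - fit) / (worst - best + 1e-12)
        mass /= mass.sum() + 1e-12
        G = g0 * np.exp(-alpha * t / iters)   # decaying gravitational constant
        acc = np.zeros_like(pos)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                d = pos[j] - pos[i]
                r = np.linalg.norm(d) + 1e-12
                acc[i] += rng.random() * G * mass[j] * d / r
        vel = rng.random(size=vel.shape) * vel + acc
        pos += vel
    fit = np.array([loss(p) for p in pos])
    return pos[fit.argmin()]

# Usage: w0 = gsa_minimize(network_loss, dim=n_weights), then
# fine-tune w0 with ordinary Backpropagation.
```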


Author(s):  
Castro Gbememali Hounmenou ◽  
Boris Milognon Behingan ◽  
Christophe Chrysostome ◽  
Kossi Essona Gneyou ◽  
Romain Lucas Glele Kakaï

Missing observations constitute one of the most important issues in data analysis in applied research studies. Their magnitude and structure affect parameter estimation in modeling, with important consequences for decision-making. This study aims to evaluate the efficiency of imputation methods combined with the backpropagation algorithm in a nonlinear regression context. The evaluation is conducted through a simulation study including sample sizes (50, 100, 200, 300 and 400) with different missing data rates (10, 20, 30, 40 and 50%) and three missingness mechanisms (MCAR, MAR and MNAR). Four imputation methods (Last Observation Carried Forward, Random Forest, Amelia and MICE) were used to impute datasets before making predictions with backpropagation. A 3-MLP model was used, varying the activation functions (Logistic-Linear, Logistic-Exponential, TanH-Linear and TanH-Exponential), the number of nodes in the hidden layer (3 - 15) and the learning rate (20 - 70%). Analysis of the performance criteria (R², r and RMSE) of the network revealed good performance when it is trained with TanH-Linear functions, 11 nodes in the hidden layer and a learning rate of 50%. MICE and Random Forest were the most appropriate methods for data imputation. These methods can handle missing rates of up to 50%, with an optimal sample size of 200.
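
A rough sketch of such an imputation-then-network pipeline using scikit-learn on synthetic MCAR data (IterativeImputer is a MICE-style chained-equations imputer, and its estimator argument swaps in Random Forest models; both are stand-ins for the implementations used in the study, and the data below are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.neural_network import MLPRegressor

# Placeholder data with values missing completely at random (MCAR).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.3] = np.nan   # 30% missing rate

# Chained-equations (MICE-style) imputation with Random Forest models.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    random_state=0)
X_imputed = imputer.fit_transform(X_missing)

# Backpropagation-trained MLP with a tanh hidden layer of 11 nodes,
# matching the best configuration reported above (the study's plain
# backprop trainer and 50% learning rate are not reproduced here).
mlp = MLPRegressor(hidden_layer_sizes=(11,), activation='tanh',
                   max_iter=3000, random_state=0).fit(X_imputed, y)
print("R2 on imputed data:", mlp.score(X_imputed, y))
```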

