Exploiting heterogeneity in operational neural networks by synaptic plasticity

Author(s):  
Serkan Kiranyaz ◽  
Junaid Malik ◽  
Habib Ben Abdallah ◽  
Turker Ince ◽  
Alexandros Iosifidis ◽  
...  

Abstract The recently proposed network model, Operational Neural Networks (ONNs), generalizes conventional Convolutional Neural Networks (CNNs), which are homogenous networks built solely on a linear neuron model. As a heterogenous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS), usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding but also limits network heterogeneity, since the same operator set is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network based on the “Synaptic Plasticity” paradigm, the essential learning mechanism of biological neurons. During training, each operator set in the library is evaluated by its synaptic plasticity level and ranked from worst to best, and an “elite” ONN is then configured using the top-ranked operator sets found for each hidden layer. Experimental results on highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, achieve superior learning performance compared to GIS-based ONNs, and as a result the performance gap over CNNs widens further.
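The ranking procedure can be sketched in a few lines. The `assign_operators`, `train_epoch`, and `plasticity_of` routines below are hypothetical placeholders standing in for the authors' actual implementation; this is an illustration of the idea, not their code.

```python
# Minimal sketch of plasticity-based operator-set ranking (illustrative;
# assign_operators, train_epoch and plasticity_of are hypothetical stand-ins).

def rank_operator_sets(operator_library, hidden_layers, train_epoch, plasticity_of):
    """For each hidden layer, score every candidate operator set by its
    measured synaptic plasticity and keep the top-ranked set."""
    elite_config = {}
    for layer in hidden_layers:
        scores = {}
        for op_set in operator_library:
            layer.assign_operators(op_set)     # hypothetical API
            train_epoch()                      # short training session
            scores[op_set] = plasticity_of(layer)
        elite_config[layer] = max(scores, key=scores.get)
    return elite_config                        # used to configure the "elite" ONN
```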

Author(s):  
Eduardo Masato Iyoda ◽  
Hajime Nobuhara ◽  
Kaoru Hirota

A multiplicative neuron model called the translated multiplicative neuron (πt-neuron) is proposed. Compared to the traditional π-neuron, the πt-neuron presents two advantages: (1) it can generate decision surfaces centered at any point of its input space; and (2) it has a meaningful set of adjustable parameters. Learning rules for πt-neurons are derived using the error backpropagation procedure. It is shown that the XOR and N-bit parity problems can be perfectly solved using only one πt-neuron, with no need for hidden neurons. The πt-neuron is also evaluated on Hwang's regression benchmark problems, in which neural networks with πt-neurons in the hidden layer perform better than conventional multilayer perceptrons (MLP) in almost all cases: errors are reduced by an average of 58% using about 33% fewer hidden neurons than the MLP.
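The XOR claim is easy to verify in code. The sketch below assumes the πt-neuron computes a sigmoid of the product of translated inputs, ∏ᵢ(xᵢ − tᵢ); the paper's exact parametrization may differ.

```python
import numpy as np

def pi_t_neuron(x, t, w=1.0):
    """Hypothetical pi_t-neuron: sigmoid of the weighted product of
    translated inputs, w * prod_i (x_i - t_i)."""
    z = w * np.prod(x - t)
    return 1.0 / (1.0 + np.exp(-z))

# XOR with a single pi_t-neuron: translating both inputs by 0.5 makes the
# product negative exactly for the (0,1) and (1,0) patterns, so a negative
# weight maps those patterns to outputs near 1.
t = np.array([0.5, 0.5])
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = pi_t_neuron(np.array(x, dtype=float), t, w=-10.0)
    print(x, round(y))   # prints 0, 1, 1, 0 -> XOR
```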


Author(s):  
JIANJUN WANG ◽  
WEIHUA XU ◽  
BIN ZOU

For three-layer artificial neural networks with trigonometric weight coefficients, this paper obtains upper and lower bounds for approximating 2π-periodic pth-order Lebesgue integrable functions f ∈ L^p_{2π}. The theorems obtained provide explicit equational representations of these approximating networks, the specification of their numbers of hidden-layer units, a lower bound estimate for the approximation, and the essential order of approximation. The results not only characterize the intrinsic approximation properties of neural networks, but also uncover the implicit relationship between the precision (speed) of approximation and the number of hidden neurons.
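Schematically (and not as the paper's exact statement), such two-sided results take a Jackson/Bernstein-type form coupling the L^p approximation error to the number of hidden units:

```latex
% Schematic two-sided estimate -- illustrative, not the paper's exact
% theorem -- for f in L^p_{2\pi} and a network N_n with n hidden units:
\[
  C_1\, \omega\!\left(f, \tfrac{1}{n}\right)_p
  \;\le\; \bigl\| f - N_n(f) \bigr\|_p
  \;\le\; C_2\, \omega\!\left(f, \tfrac{1}{n}\right)_p ,
\]
% where \omega(f, \cdot)_p denotes a modulus of smoothness in the L^p
% norm, making the precision/hidden-unit trade-off explicit.
```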


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 147 ◽  
Author(s):  
Jun Ye ◽  
Wenhua Cui

Neural networks are powerful universal approximation tools. They have been utilized for function/data approximation, classification and pattern recognition, as well as in various applications. Uncertain or interval values result from the incompleteness of measurements, human observation and estimation in the real world. A neutrosophic number (NsN) can represent both certain and uncertain information in an indeterminate setting and implies a changeable interval depending on its indeterminate ranges. Existing interval neural networks, however, cannot deal with uncertain problems expressed by NsNs. Therefore, this study proposes, for the first time, a neutrosophic compound orthogonal neural network (NCONN) containing NsN weight values, NsN inputs and outputs, and hidden-layer neutrosophic neuron functions, to approximate neutrosophic functions/NsN data. In the proposed NCONN model, the single input and single output neurons are the transmission nodes of NsN data, while the hidden-layer neutrosophic neurons are constructed from compound functions of the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. Illustrative and actual examples are provided to verify the effectiveness and learning performance of the proposed NCONN model for approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximation problems of neutrosophic nonlinear functions and NsN data; its main advantages are a simple learning algorithm, faster learning convergence, and higher learning accuracy in indeterminate/NsN environments.
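A minimal sketch of such a compound hidden neuron follows, assuming the compound takes the form T_j(σ(x)) and that an NsN input is reduced to its implied interval; both assumptions are illustrative, and the endpoint propagation below is exact only where the compound map is monotone.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chebyshev(j, u):
    """Chebyshev polynomial T_j via the recurrence T_{n+1} = 2u*T_n - T_{n-1}."""
    t_prev, t = np.ones_like(u), u
    if j == 0:
        return t_prev
    for _ in range(j - 1):
        t_prev, t = t, 2.0 * u * t - t_prev
    return t

def compound_neuron(j, interval):
    """Hypothetical hidden-neuron form T_j(sigmoid(x)) applied to an
    interval input [lo, hi]; endpoints are propagated and re-ordered."""
    lo, hi = interval
    ys = chebyshev(j, sigmoid(np.array([lo, hi])))
    return min(ys), max(ys)

# An NsN u = 1 + 2I with indeterminacy I in [0, 0.5] implies the interval [1, 2].
print(compound_neuron(2, (1.0, 2.0)))
```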


Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 648 ◽  
Author(s):  
Ismoilov Nusrat ◽  
Sung-Bong Jang

Artificial neural networks (ANNs) have attracted significant attention from researchers because many complex problems can be solved by training them. If enough data are provided during the training process, ANNs are capable of achieving good performance. However, if training data are insufficient, the predefined neural network model suffers from overfitting and underfitting. To solve these problems, several regularization techniques have been devised and widely applied in applications and data analysis. However, it is difficult for developers to choose the most suitable scheme for an application under development, because there is little comparative information on the performance of each scheme. This paper describes comparative research on regularization techniques, evaluating the training and validation errors of a deep neural network model on a weather dataset. For the comparison, each algorithm was implemented using a recent version of the TensorFlow neural network library. The experimental results showed that the autoencoder had the worst performance among the schemes. When prediction accuracy was compared, data augmentation and batch normalization performed better than the others.
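For illustration, three of the compared schemes (L2 weight decay, dropout and batch normalization) can coexist in a single Keras model; the feature count, layer sizes and hyperparameters below are placeholders, not the configuration used in the paper.

```python
import tensorflow as tf

# Minimal sketch combining three regularization schemes in one Keras model
# (illustrative sizes and hyperparameters, not the paper's configuration).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),                               # placeholder feature count
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),   # L2 weight decay
    tf.keras.layers.BatchNormalization(),                     # batch normalization
    tf.keras.layers.Dropout(0.3),                             # dropout
    tf.keras.layers.Dense(1),                                 # regression output
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, validation_split=0.2, epochs=100)
```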


2020 ◽  
Vol 9 (1) ◽  
pp. 41-49
Author(s):  
Johanes Roisa Prabowo ◽  
Rukun Santoso ◽  
Hasbi Yasin

Housing is one aspect of social welfare that must be met, because a house is a primary human need alongside clothing and food. Whether a house provides good shelter can be determined from the structure and facilities of the building. This research aims to classify house conditions as livable or not livable. The method used is artificial neural networks (ANN). An ANN is an information-processing system with characteristics similar to biological neural networks. The optimization method used in this research is the conjugate gradient algorithm. The data used are the Survei Sosial Ekonomi Nasional (Susenas) March 2018 Kor Keterangan Perumahan data for Cilacap Regency. The data are divided into training and testing sets; the proportion giving the highest average accuracy is 90% training data and 10% testing data. The best architecture obtained is a model consisting of 8 neurons in the input layer, 10 neurons in the hidden layer and 1 neuron in the output layer. The activation functions used are the bipolar sigmoid in the hidden layer and the binary sigmoid in the output layer. The results show that the ANN classifies house conditions in Cilacap Regency very well, with an average accuracy of 98.96% in the training stage and 97.58% in the testing stage.
Keywords: House, Classification, Artificial Neural Networks, Conjugate Gradient
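A minimal sketch of such an 8-10-1 network trained by conjugate gradients is shown below, assuming tanh as the bipolar sigmoid and the logistic function as the binary sigmoid; the random arrays stand in for the Susenas survey features and labels.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of an 8-10-1 network trained with conjugate gradients; random
# placeholder data stands in for the Susenas survey features/labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)

def unpack(p):
    W1, b1 = p[:80].reshape(8, 10), p[80:90]
    W2, b2 = p[90:100].reshape(10, 1), p[100]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)                  # bipolar sigmoid hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # binary sigmoid output layer

def loss(p):
    return np.mean((forward(p, X).ravel() - y) ** 2)

p0 = rng.normal(scale=0.1, size=101)          # 80 + 10 + 10 + 1 parameters
res = minimize(loss, p0, method="CG")         # conjugate gradient optimizer
preds = (forward(res.x, X).ravel() > 0.5).astype(int)
print("training accuracy:", np.mean(preds == y))
```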


2018 ◽  
Author(s):  
Florence I. Kleberg ◽  
Jochen Triesch

Abstract Synapses between cortical neurons are subject to constant modification through synaptic plasticity mechanisms, which are believed to underlie learning and memory formation. The strengths of excitatory and inhibitory synapses in the cortex follow a right-skewed, long-tailed distribution. Similarly, the firing rates of excitatory and inhibitory neurons also follow a right-skewed, long-tailed distribution. How these distributions come about and how they maintain their shape over time is currently not well understood. Here we propose a spiking neural network model that explains the origin of these distributions as a consequence of the interaction of spike-timing dependent plasticity (STDP) of excitatory and inhibitory synapses with a multiplicative form of synaptic normalisation. Specifically, we show that the combination of additive STDP and multiplicative normalisation leads to lognormal-like distributions of excitatory and inhibitory synaptic efficacies, as observed experimentally. The shape of these distributions remains stable even if spontaneous fluctuations of synaptic efficacies are added. In the same network, lognormal-like distributions of the firing rates of excitatory and inhibitory neurons result from small variability in the spiking thresholds of individual neurons. Interestingly, we find that variation in firing rates is strongly coupled to variation in synaptic efficacies: neurons with the highest firing rates develop very strong connections onto other neurons. Finally, we define an impact measure for individual neurons and demonstrate the existence of a small group of neurons with an exceptionally strong impact on the network, which arises as a result of synaptic plasticity. In summary, synaptic plasticity and small variability in neuronal parameters underlie a neural oligarchy in recurrent neural networks.

Author summary Our brain's neural networks are composed of billions of neurons that exchange signals via trillions of synapses. Are these neurons created equal, or do they contribute in similar ways to the network dynamics? Or do some neurons wield much more power than others? Recent experiments have shown that some neurons are much more active than the average neuron and that some synaptic connections are much stronger than the average synaptic connection. However, it is still unclear how these properties come about in the brain. Here we present a neural network model that explains these findings as a result of the interaction of synaptic plasticity mechanisms that modify synapses' efficacies. The model reproduces recent findings on the statistics of neuronal firing rates and synaptic efficacies and predicts a small class of neurons with an exceptionally high impact on the network dynamics. Such neurons may play a key role in brain disorders such as epilepsy.
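The core interaction can be caricatured in a few lines: additive STDP kicks followed by multiplicative renormalisation of a neuron's incoming weights. The spike statistics and constants below are toy placeholders, not the paper's full spiking-network model.

```python
import numpy as np

# Toy sketch of additive STDP combined with multiplicative synaptic
# normalisation (illustrative constants and random spike statistics;
# not the paper's full spiking-network model).
rng = np.random.default_rng(1)
n_pre, w_total = 200, 10.0
w = rng.uniform(0.01, 0.1, size=n_pre)        # incoming synaptic efficacies

for step in range(20000):
    pre = rng.random(n_pre) < 0.05            # presynaptic spikes this step
    post = rng.random() < 0.05                # postsynaptic spike this step
    w += 0.002 * pre * post                   # additive potentiation on pairing
    w -= 0.001 * pre * (1 - post)             # additive depression otherwise
    np.clip(w, 1e-4, None, out=w)             # efficacies stay positive
    w *= w_total / w.sum()                    # multiplicative normalisation

# Inspect the skew of the resulting weight distribution.
print(f"mean={w.mean():.4f} median={np.median(w):.4f} max={w.max():.4f}")
```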


2019 ◽  
Vol 14 (4) ◽  
Author(s):  
Ana Carolina Moreno Pássaro ◽  
Tainá Maia Mozetic ◽  
Jones Erni Schmitz ◽  
Ivanildo José da Silva ◽  
Tiago Dias Martins ◽  
...  

Abstract This work aimed to evaluate the interaction of human IgG with non-conventional adsorbents based on chitosan and alginate, in the absence and presence of Reactive Green, Reactive Blue and Cibacron Blue immobilized as ligands. Adsorption was evaluated at 277, 288, 298 and 310 K using 25 mmol L⁻¹ sodium phosphate buffer, pH 7.6. The highest adsorption capacity was observed in the experiments performed with no immobilized dye, although all adsorbents showed adsorption capacities higher than 120 mg g⁻¹. Data modeling was done using the classical nonlinear Langmuir, Langmuir-Freundlich and Temkin models, with artificial neural networks (ANN) for comparison. According to the parameters obtained, possible multilayer adsorption was observed, due to protein-adsorbent and protein-protein interactions, and the IgG adsorption process was found to be favorable and spontaneous. Using an ANN structure with 3 hidden neurons (single hidden layer), the MSE (RMSE) for training, test and validation were 13.698 (3.701), 11.206 (3.347) and 7.632 (2.763), respectively, with correlation coefficients of 0.999 at all steps. ANN modeling proved effective in predicting the adsorption isotherms, in addition to overcoming difficulties caused by experimental errors and/or the adsorption phenomenology.
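For reference, standard textbook forms of the three fitted isotherms are given below; symbols follow common usage, and the paper's exact parametrizations may differ.

```latex
% Langmuir, Langmuir-Freundlich (Sips) and Temkin isotherms
% (q_e: equilibrium adsorption capacity, C_e: equilibrium concentration;
% the remaining symbols are model parameters):
\[
  q_e = \frac{q_m K_L C_e}{1 + K_L C_e}, \qquad
  q_e = \frac{q_m \left(K_{LF} C_e\right)^{n}}{1 + \left(K_{LF} C_e\right)^{n}}, \qquad
  q_e = \frac{RT}{b_T}\,\ln\!\left(K_T C_e\right).
\]
```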


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Carlos A. Escobar ◽  
Ruben Morales-Menendez

Since most manufacturing systems generate only a few defects per million opportunities, rare quality event detection is one of the main applications of process monitoring for quality. Single-hidden-layer feed-forward neural networks have been successfully applied to perform this task. However, since the best network structure is not known in advance, many models need to be learned and tested to select a final model with the right number of hidden neurons. A new three-dimensional (3D)


2019 ◽  
Vol 25 (4) ◽  
pp. 515-521
Author(s):  
Siamak Boudaghpour ◽  
Hajar Sadat Alizadeh Moghadam ◽  
Mohammadreza Hajbabaie ◽  
Seyed Hamidreza Toliati

Nowadays, due to various pollution sources, it is essential for environmental scientists to monitor water quality. Phytoplankton forms the base of the food chain in water bodies and is one of the most important biological indicators in water pollution studies. Chlorophyll-A, a green pigment, is found in all phytoplankton, and its concentration directly indicates phytoplankton biomass. Chlorophyll-A is therefore an indirect indicator of pollutants such as phosphorus and nitrogen, whose treatment and control are important. In the present study, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images were used to estimate the chlorophyll-A concentration in the southern coastal waters of the Caspian Sea. For this purpose, multi-layer perceptron neural networks (NNs) with three and four feed-forward layers were applied. The best three-layer NN has 15 neurons in its hidden layer, and the best four-layer one has 5 in each hidden layer. The three- and four-layer networks both resulted in similar root mean square errors (RMSE) of about 0.1 μg/l; however, the four-layer NNs proved superior in terms of R² and also required less training data. Accordingly, a four-layer feed-forward NN with 5 neurons in each hidden layer is the best network structure for estimating chlorophyll-A concentration in the southern coastal waters of the Caspian Sea.
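The comparison metric is the root mean square error over the n estimates:

```latex
% Definition of the comparison metric, where c_i is the measured and
% \hat{c}_i the network-estimated chlorophyll-A concentration:
\[
  \mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(\hat{c}_i - c_i\right)^{2}} .
\]
```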

