How training and testing histories affect generalization: a test of simple neural networks

2007 · Vol 362 (1479) · pp. 449-454
Author(s):  
Stefano Ghirlanda ◽  
Magnus Enquist

We show that a simple network model of associative learning can reproduce three findings that arise from particular training and testing procedures in generalization experiments: (i) the effect of ‘errorless learning’, (ii) the effect of extinction testing on peak shift, and (iii) the central tendency effect. These findings provide a true test of the network model, which was developed to account for other phenomena, and they highlight the potential of neural networks for studying phenomena that depend on sequences of experiences with many stimuli. Our results suggest that at least some such phenomena, e.g. stimulus range effects, may derive from basic mechanisms of associative memory rather than from more complex memory processes.
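
To make the class of models concrete: work in this vein typically uses a one-layer network in which a stimulus dimension is encoded by an array of input units with overlapping (e.g. Gaussian) tuning curves, and a single output unit is trained with the delta rule. The sketch below is a minimal illustration of that setup, not the authors' implementation; the stimulus values, tuning width, learning rate, and number of input units are all assumed for the example. It shows the basic peak-shift result: after discrimination training between a reinforced stimulus S+ and a non-reinforced stimulus S-, the generalization gradient peaks at a point displaced away from S-.

```python
# Minimal one-layer associative-learning sketch (illustrative parameters).
import numpy as np

centers = np.linspace(-5, 5, 101)      # preferred stimuli of the input units
sigma = 0.8                            # tuning width (assumed)

def encode(s):
    """Activation pattern a stimulus s evokes across the input layer."""
    return np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))

w = np.zeros_like(centers)             # weights from input units to output
lr = 0.05
s_plus, s_minus = 0.0, -1.0            # reinforced / non-reinforced stimuli

for _ in range(500):                   # interleaved discrimination training
    for s, target in ((s_plus, 1.0), (s_minus, 0.0)):
        x = encode(s)
        w += lr * (target - w @ x) * x # delta-rule update

test = np.linspace(-3, 3, 241)
gradient = [w @ encode(s) for s in test]
peak = test[int(np.argmax(gradient))]
print(f"gradient peaks at {peak:.2f}, shifted away from S- at {s_minus}")
```

Effects that depend on training and testing histories, such as extinction testing or exposure to a particular range of test stimuli, can then be probed by changing the sequence of stimuli and targets fed to the same network.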

2021 · Vol 443 · pp. 222-234
Author(s):  
Jia Liu ◽  
Wenhua Zhang ◽  
Fang Liu ◽  
Liang Xiao

Author(s):  
Serkan Kiranyaz ◽  
Junaid Malik ◽  
Habib Ben Abdallah ◽  
Turker Ince ◽  
Alexandros Iosifidis ◽  
...  

The recently proposed Operational Neural Networks (ONNs) generalize conventional Convolutional Neural Networks (CNNs), which are homogeneous networks built on a single linear neuron model. As a heterogeneous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators, boosting diversity and allowing highly complex and multi-modal functions or spaces to be learned with minimal network complexity and training data. However, the default method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS), usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding but also limits network heterogeneity, since the same operator set is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best possible operator set(s) for the hidden neurons of the network based on the "Synaptic Plasticity" paradigm, which constitutes the essential learning mechanism in biological neurons. During training, each operator set in the library can be evaluated by its synaptic plasticity level, ranked from worst to best, and an "elite" ONN can then be configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, achieve learning performance superior to GIS-based ONNs, further widening the performance gap over CNNs.
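
To illustrate the operator-set ranking idea, the sketch below builds a single "operational" neuron y = P(ψ(w, x)) from a small library of nodal operators ψ and pool operators P, and scores each (ψ, P) pair by the loss reduction it achieves in a brief training run on a toy regression task. The operator library, the task, and this loss-drop score are illustrative stand-ins only; the paper's synaptic-plasticity measure, network architecture, and training pipeline are not reproduced here.

```python
# Toy operational-neuron sketch with an illustrative plasticity proxy.
import numpy as np

rng = np.random.default_rng(0)

nodal_ops = {                          # nodal operators psi(w, x)
    "mul": lambda w, x: w * x,         # recovers the linear (CNN) neuron
    "sin": lambda w, x: np.sin(w * x),
    "exp": lambda w, x: np.exp(w * x) - 1.0,
}
pool_ops = {                           # pool operators P(.)
    "sum":    lambda z: z.sum(axis=-1),
    "median": lambda z: np.median(z, axis=-1),
}

X = rng.normal(size=(256, 8))          # shared toy regression task
y = np.tanh(X @ rng.normal(size=8))    # arbitrary non-linear target

def score(nodal, pool, steps=200, lr=0.01):
    """Plasticity proxy: loss drop of one neuron after a brief training run."""
    w = rng.normal(scale=0.1, size=8)
    def loss(w):
        return np.mean((pool(nodal(w, X)) - y) ** 2)
    first = loss(w)
    for _ in range(steps):             # finite-difference gradient descent
        g = np.array([(loss(w + 1e-4 * e) - loss(w - 1e-4 * e)) / 2e-4
                      for e in np.eye(8)])
        w -= lr * g
    return first - loss(w)

ranked = sorted(((score(n, p), n_name, p_name)
                 for n_name, n in nodal_ops.items()
                 for p_name, p in pool_ops.items()), reverse=True)
for s, n_name, p_name in ranked:       # top-ranked sets would form the "elite"
    print(f"nodal={n_name:3s} pool={p_name:6s} plasticity proxy={s:.4f}")
```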


Author(s):  
Y Wang ◽  
P Hu

In this paper, the problem of global robust stability is discussed for uncertain Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with delays. The parameter uncertainties are assumed to be norm-bounded. Sufficient conditions for global robust stability are derived by employing a Lyapunov-Krasovskii functional. Based on these, conditions ensuring global asymptotic stability in the absence of parameter uncertainties are established. All conditions are expressed in terms of linear matrix inequalities (LMIs). Finally, two examples are provided to illustrate the effectiveness of the obtained results.
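
As an illustration of how such LMI conditions are checked in practice, the sketch below tests a textbook delay-independent stability LMI for a single delayed network x'(t) = -A x(t) + W f(x(t - τ)) using CVXPY. This is the Lyapunov-Krasovskii recipe in miniature, not the paper's CG-type BAM formulation with uncertainties, and the matrices A and W and the activation slope bound L are invented for the example. With the functional V = xᵀPx + ∫ f(x(s))ᵀQ f(x(s)) ds taken over the delay interval, negative definiteness of the block matrix below implies global asymptotic stability.

```python
# Feasibility check of a delay-independent stability LMI (illustrative data).
import cvxpy as cp
import numpy as np

A = np.diag([3.0, 4.0])                # self-decay rates (invented)
W = np.array([[0.5, -0.8],
              [0.6,  0.4]])            # delayed interconnection weights (invented)
L = np.eye(2)                          # Lipschitz bounds of the activations (tanh)

P = cp.Variable((2, 2), symmetric=True)
q = cp.Variable(2)                     # diagonal entries of Q
Q = cp.diag(q)

# Block LMI: [[-PA - A'P + LQL, PW], [W'P, -Q]] negative definite.
M = cp.bmat([[-P @ A - A.T @ P + L @ Q @ L, P @ W],
             [W.T @ P,                      -Q]])
M = 0.5 * (M + M.T)                    # symmetrize for the PSD constraint
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(2), q >= eps, M << -eps * np.eye(4)])
prob.solve(solver=cp.SCS)
print("stability LMI feasible:", prob.status == cp.OPTIMAL)
```

The paper's robust conditions follow the same pattern, with additional terms covering the norm-bounded uncertainty blocks.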

