HOW A PEAK CAN BE SEARCHED FOR IN AN ALMOST EVERYWHERE FLATLAND OF ALTITUDE ZERO? — TINY FLAT ISLAND IN HUGE LAKE

2014 ◽  
pp. 32-37
Author(s):  
Akira Imada

We are exploring a weight-configuration space in search of solutions that make our neural network with spiking neurons perform certain tasks. For the task of simulating an associative memory model, we already know one such solution — the weight configuration obtained by learning a set of patterns with Hebb’s rule — and we conjecture that many others exist which we have not yet found. While searching for such solutions, we observed that the so-called fitness landscape is almost everywhere a completely flat plain of altitude zero, in which the Hebbian weight configuration is the only known peak; moreover, the sidewall of that peak has no gradient at all. Under such circumstances, how could we search for the other peaks? This paper is a call for challenges to this problem.
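The one known peak the abstract refers to — the Hebbian weight configuration — can be sketched as follows. This is a minimal illustration of the standard construction (outer-product Hebb rule plus a fixed-point check under sign dynamics), not the authors' experimental setup; the pattern count, network size, and the `fitness` function here are illustrative assumptions.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebb's rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T, with zero diagonal."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def fitness(W, patterns):
    """Fraction of stored patterns that are fixed points of sign dynamics --
    an illustrative stand-in for the paper's fitness measure."""
    stable = sum(np.array_equal(np.sign(W @ xi), xi) for xi in patterns)
    return stable / len(patterns)

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))  # 3 random bipolar patterns, N = 64
W = hebbian_weights(patterns)
print(fitness(W, patterns))  # prints 1.0: all stored patterns are stable
```

Well below the Hopfield capacity (about 0.138 N patterns), the Hebbian configuration recalls every stored pattern, which is what makes it a peak; a random weight matrix almost never does, which is the flatland.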

2012 ◽  
Vol 391 (3) ◽  
pp. 843-848 ◽  
Author(s):  
Everton J. Agnes ◽  
Rubem Erichsen ◽  
Leonardo G. Brunnet

2014 ◽  
pp. 8-15
Author(s):  
M. Kamrul Islam

In neural networks, an associative memory is one in which applying an input pattern elicits the response of a corresponding stored pattern. During the learning phase the memory is fed a number of input vectors, and in the recall phase, when a known input is presented, the network recalls and reproduces the corresponding output vector. Here, we improve and increase the storage ability of the memory model proposed in [1]. We show that there are instances where that algorithm cannot deliver the desired performance of retrieving exactly the correct vector: several output vectors can become activated by the stimulus of a single input vector, while the desired output is just one vector. Our proposed solution overcomes this and uniquely determines the output vector when an input vector is applied. Thus we provide a more general treatment of this neural network memory model consisting of Competitive Cooperative Neurons (CCNs).
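The failure mode described — several outputs activating for one input — and the generic way to restore uniqueness can be illustrated with a toy similarity-based recall step. This is not the CCN algorithm from [1] (whose details are not given here); it is a minimal sketch in which a deterministic winner-take-all rule guarantees a single active output, with all names and data invented for illustration.

```python
import numpy as np

def recall_unique(prototypes, x):
    """Return the index of the single winning stored pattern for input x.
    np.argmax breaks ties deterministically (lowest index), so exactly one
    output is selected even when several prototypes score equally."""
    activations = prototypes @ x  # inner-product similarity to each prototype
    return int(np.argmax(activations))

# Toy store: each row is a stored bipolar input prototype.
prototypes = np.array([[ 1, -1,  1, -1],
                       [ 1,  1, -1, -1],
                       [-1,  1,  1, -1]], dtype=float)

x = np.array([1, -1, 1, -1], dtype=float)  # exact copy of prototype 0
print(recall_unique(prototypes, x))         # prints 0: one and only one winner
```

The point of the sketch is the contract, not the mechanism: whatever the internal dynamics, the recall phase must map each input to exactly one output vector.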


2005 ◽  
Vol 18 (5-6) ◽  
pp. 666-673 ◽  
Author(s):  
Kazuko Itoh ◽  
Hiroyasu Miwa ◽  
Hideaki Takanobu ◽  
Atsuo Takanishi

2010 ◽  
Vol 22 (2) ◽  
pp. 448-466 ◽  
Author(s):  
J. M. Cortes ◽  
A. Greve ◽  
A. B. Barrett ◽  
M. C. W. van Rossum

When presented with an item or a face, one might have a sense of recognition without being able to recall when or where the stimulus was encountered before. This sense of recognition is called familiarity memory. Following previous computational studies of familiarity memory, we investigate the dynamical properties of familiarity discrimination and contrast two different familiarity discriminators: one based on the energy of the neural network, the other on the time derivative of that energy. We show how the familiarity signal decays rapidly after stimulus presentation. For both discriminators, we calculate the capacity using mean-field analysis. Compared with recall capacity (classical associative memory in Hopfield nets), both the energy and the slope discriminators have a larger capacity, yet the energy-based discriminator has a higher capacity than the one based on its time derivative. Finally, the two discriminators are found to differ in their noise dependence.
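The two discriminators contrasted in the abstract can be sketched in a Hopfield setting. The energy discriminator thresholds E(s) = -½ sᵀWs, which is much lower for stored (familiar) patterns than for novel ones; the slope discriminator uses the time derivative of the energy, approximated here by the energy change over one synchronous update step. The network size, pattern count, and the discrete slope proxy are illustrative assumptions, not the paper's mean-field setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))  # stored (familiar) patterns
W = patterns.T @ patterns / N                    # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def energy(W, s):
    """Hopfield energy E(s) = -1/2 s^T W s; deep minima at stored patterns."""
    return -0.5 * s @ W @ s

def energy_slope(W, s):
    """Discrete proxy for dE/dt: energy change over one synchronous update.
    Near a stored pattern (a fixed point) the slope is ~0; from a novel
    pattern the dynamics typically move downhill, giving a negative slope."""
    s_next = np.sign(W @ s)
    s_next[s_next == 0] = 1.0
    return energy(W, s_next) - energy(W, s)

novel = rng.choice([-1.0, 1.0], size=N)  # a never-stored pattern
print(energy(W, patterns[0]) < energy(W, novel))  # prints True: familiar is lower
```

Either quantity, compared against a threshold, yields a familiarity judgment; the abstract's result is that both have a larger capacity than recall, with the energy version ahead of the slope version.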


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1065
Author(s):  
Moshe Bensimon ◽  
Shlomo Greenberg ◽  
Moshe Haiut

This work presents a new approach to sound preprocessing and classification based on a spiking neural network. The approach is biologically inspired, using spiking neurons that mimic the biological neuron’s characteristics together with a Spike-Timing-Dependent Plasticity (STDP) learning rule. We propose a biologically plausible sound classification framework that uses a Spiking Neural Network (SNN) to detect the frequencies embedded within an acoustic signal, and we demonstrate an efficient hardware implementation of the network based on the low-power Spike Continuous Time Neuron (SCTN). The framework interfaces the acoustic sensor with the SCTN-based network directly via Pulse Density Modulation (PDM), avoiding the use of costly digital-to-analog conversions.

This paper also presents a new connectivity approach for Spiking Neuron (SN)-based neural networks. We suggest treating the SCTN as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron serves as a repeated modular element in a neural network, and the connectivity between neurons in different layers is well defined, yielding a modular network structure of several layers with full or partial connectivity. The proposed approach instead controls the behavior of the spiking neurons and applies smart connectivity, enabling the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, in which the preprocessing phase is carried out using analog circuits and analog-to-digital conversion, we integrate the preprocessing phase into the network itself. This allows the basic SCTN to be treated as an analog module, enabling the design of simple SNN-based analog circuits with unique inter-connections between the neurons.

The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based approach achieves a classification accuracy of 98.73% on the Real-World Computing Partnership (RWCP) database.

