NUMERICAL SIMULATION AND EXPERIMENTAL STUDY OF A HARDWARE PULSE NEURAL NETWORK WITH MEMRISTOR SYNAPSES

Author(s): Alexander N. Busygin, Andrey N. Bobylev, Alexey A. Gubin, Alexander D. Pisarev, Sergey Yu. Udovichenko

This article presents the results of a numerical simulation and an experimental study of the electrical circuit of a hardware spiking perceptron based on a memristor-diode crossbar. This work required developing and manufacturing a measuring bench whose electrical circuit consists of the hardware perceptron circuit and an input peripheral circuit that implements the activation functions of the neurons and ensures operation of the memory matrix in a spiking mode. The authors studied the operation of the hardware spiking neural network with memristor synapses, in the form of a memory matrix, in single-layer perceptron mode. The perceptron can be considered the first layer of a biomorphic neural network that performs primary processing of incoming information in a biomorphic neuroprocessor. The experimental and simulated learning curves show the expected increase in the proportion of correct classifications as the number of training epochs grows. The authors demonstrate the generation of a new association during retraining, caused by the arrival of new input information. Comparing the modeling results with an experiment on training a small neural network with a small crossbar will make it possible to create adequate models of hardware neural networks with a large memristor-diode crossbar. The arrival of new, unknown information at the input of the hardware spiking neural network can be associated with the generation of new associations in the biomorphic neuroprocessor. With further improvement of the neural network, this information will be comprehended and will therefore enable the transition from weak to strong artificial intelligence.
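
As a minimal sketch of the kind of system described above (not the authors' circuit model), the following illustrates a single-layer spiking perceptron whose weights are bounded conductances, mimicking a memristor-diode crossbar. All names, dimensions, and parameters are illustrative assumptions.

```python
# Minimal sketch: single-layer spiking perceptron with memristive weights.
# Conductance bounds G_MIN/G_MAX mimic the limited range of a memristor cell.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 16, 4                            # crossbar dimensions (illustrative)
G_MIN, G_MAX = 0.1, 1.0                        # conductance window of one cell
W = rng.uniform(G_MIN, G_MAX, (N_OUT, N_IN))   # synaptic conductance matrix

def forward(spike_counts, theta=4.0):
    """Integrate input spikes through the crossbar; fire if above threshold."""
    currents = W @ spike_counts                # column currents summed per output row
    return (currents >= theta).astype(float)

def train_step(spike_counts, target, lr=0.05):
    """Perceptron-style update, clipped to the memristive conductance window."""
    global W
    out = forward(spike_counts)
    W += lr * np.outer(target - out, spike_counts)
    W = np.clip(W, G_MIN, G_MAX)               # device conductance cannot leave its range
    return out

# One training epoch over a toy dataset of rate-coded patterns
X = rng.integers(0, 5, (32, N_IN)).astype(float)   # spike counts per time window
Y = np.eye(N_OUT)[rng.integers(0, N_OUT, 32)]      # one-hot class labels
for x, y in zip(X, Y):
    train_step(x, y)
```

Running repeated epochs over such data and recording the fraction of correct classifications per epoch yields a learning curve of the kind compared between simulation and experiment in the article.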

Sensors, 2021, Vol. 21 (8), pp. 2678
Author(s): Sergey A. Lobov, Alexey I. Zharinov, Valeri A. Makarov, Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of the global network memory using the synaptic vector field approach to validate results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect: the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps positive and negative areas, allowing it to escape the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
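
The synaptic vector field idea can be sketched as follows: each neuron is assigned a vector that sums its outgoing synaptic weights, oriented toward the positions of its postsynaptic targets. This is an illustrative reconstruction under assumed names and shapes, not the paper's implementation.

```python
# Sketch of a synaptic vector field over a spatially embedded network.
import numpy as np

rng = np.random.default_rng(1)
N = 100
pos = rng.uniform(0.0, 1.0, (N, 2))      # neuron coordinates in the arena
W = rng.uniform(0.0, 1.0, (N, N))        # W[i, j]: weight of synapse i -> j
np.fill_diagonal(W, 0.0)                 # no self-connections

def synaptic_vector_field(pos, W):
    """For each neuron i, sum weight-scaled unit vectors toward its targets."""
    field = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                 # displacements toward targets
        norms = np.linalg.norm(d, axis=1)
        norms[i] = 1.0                   # avoid division by zero (d[i] is zero anyway)
        field[i] = (W[i][:, None] * d / norms[:, None]).sum(axis=0)
    return field

field = synaptic_vector_field(pos, W)
# A scalar proxy for global network memory: mean magnitude of per-neuron vectors
global_memory = np.linalg.norm(field, axis=1).mean()
```

After Hebbian training, such vectors tend to align toward stimulation foci, which is what makes a scalar summary of the field usable as a memory measure.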


Electronics, 2019, Vol. 8 (10), pp. 1065
Author(s): Belyaev, Velichko

In this paper, we present an electrical circuit of a leaky integrate-and-fire neuron with one VO2 switch, which models the properties of biological neurons. Based on VO2 neurons, a two-layer spiking neural network consisting of nine input and three output neurons is modeled in the SPICE simulator. The network contains excitatory and inhibitory couplings and implements the winner-takes-all principle in pattern recognition. Using a supervised spike-timing-dependent plasticity (STDP) training method and a timing method of information coding, the network was trained to recognize three patterns with dimensions of 3 × 3 pixels. The neural network is able to recognize up to 10^5 images per second and has the potential to increase the recognition speed further.
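
A generic leaky integrate-and-fire neuron, integrated with the Euler method, can serve as a software stand-in for the switching dynamics described above. This is not a SPICE model of the VO2 circuit; all parameters are illustrative.

```python
# Generic LIF neuron: leaky integration, threshold crossing, reset.
import numpy as np

def lif_run(I, dt=1e-4, tau=5e-3, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Return membrane trace and spike times for an input-current array I."""
    v = v_rest
    trace, spikes = [], []
    for k, i_k in enumerate(I):
        v += dt / tau * (-(v - v_rest) + i_k)  # leaky integration step
        if v >= v_th:                          # threshold crossing -> spike
            spikes.append(k * dt)
            v = v_reset                        # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold drive produces periodic firing
trace, spikes = lif_run(np.full(2000, 1.5))
```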


2018, Vol. 173, pp. 01025
Author(s): Xi Zhu, Yi Sun, Haijun Liu, Qingjiang Li, Hui Xu

In order to gain a better understanding of the brain and explore biologically inspired computation, significant attention is being paid to research into spike-based neural computation. The spiking neural network (SNN), inspired by observed biological structure, has been increasingly applied to pattern recognition tasks. In this work, a single-layer SNN architecture based on spike-timing-dependent plasticity (STDP) characteristics, in accordance with actual device test data, is proposed. The device data are derived from a fabricated Ag/GeSe/TiN memristor. The network has been tested on the MNIST dataset, and the classification accuracy reaches 90.2%. Furthermore, the impact of device instability on SNN performance is discussed, which suggests guidelines for fabricating memristors used in STDP-based SNN architectures.
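
A pairwise STDP rule of the kind fitted to measured device data looks like the sketch below. The exponential window parameters here are generic textbook values, not the Ag/GeSe/TiN device characteristics.

```python
# Pairwise STDP: potentiate causal pre->post pairs, depress anti-causal ones.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # exponential window time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fires before pre: depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Causal pairing strengthens, anti-causal pairing weakens the synapse
print(stdp_dw(10.0, 15.0))   # > 0
print(stdp_dw(15.0, 10.0))   # < 0
```

Device instability would enter such a model as trial-to-trial variation in the amplitudes and time constants, which is what the paper's analysis of SNN performance addresses.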


Author(s): S.A. Lobov

We propose a memory model based on a spiking neural network with spike-timing-dependent plasticity (STDP). In the model, information is recorded using local external stimulation. Memory decoding is a functional response in the form of population bursts of action potentials synchronized with the applied stimuli. In our model, STDP-mediated weight rearrangements are able to encode the localization of the applied stimulation, while the stimulation focus forms the source of the vector field of synaptic connections. Based on the characteristics of this field, we propose a measure of generalized network memory. With repeated stimulations, we observe a decrease in the time until synchronous activity occurs. In this case, the obtained average learning curve and the dependence of the generalized memory on the stimulation number are both characterized by a power law. We show that the maximum time to reach a functional response is determined by the generalized memory remaining from previous stimulations. Thus, the power-law learning curves are due to the incomplete forgetting of previous influences. We study the reliability of generalized network memory, determined by the storage time of memory traces after the termination of external stimulation. The reliability depends on the level of neural noise, and this dependence is also a power law. We found that hubs, i.e., neurons that can initiate the generation of population bursts in the absence of noise, play a key role in maintaining generalized network memory. The inclusion of neural noise leads to random bursts initiated by neurons that are not hubs. This noise activity destroys memory traces and reduces the reliability of generalized network memory.
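
Checking a power-law dependence such as the learning curve above typically amounts to a linear fit in log-log coordinates, T(n) = a * n^(-b). A minimal sketch with synthetic placeholder data (not the paper's results):

```python
# Fit a power law T(n) = a * n**(-b) to a noisy synthetic learning curve.
import numpy as np

n = np.arange(1, 21, dtype=float)          # stimulation number
noise = np.exp(0.05 * np.random.default_rng(2).normal(size=n.size))
T = 50.0 * n**-0.7 * noise                 # time to functional response (a.u.)

slope, log_a = np.polyfit(np.log(n), np.log(T), 1)   # linear fit in log-log space
print(f"fitted exponent: {-slope:.2f}, prefactor: {np.exp(log_a):.1f}")
```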

