Investigating the role of orientation information in face processing within a spiking neural network

2021 ◽  
Vol 21 (9) ◽  
pp. 2766
Author(s):  
Matthew Bennett ◽  
Tushar Chauhan ◽  
Benoît Cottereau ◽  
Valerie Goffaux

2021 ◽  
Author(s):  
Faramarz Faghihi ◽  
Siqi Cai ◽  
Ahmed Moustafa

Recent studies have shown that alpha-band (8-13 Hz) EEG signals enable the decoding of auditory spatial attention. However, deep learning methods typically require a large amount of training data. Inspired by sparse coding in cortical neurons, we propose a spiking neural network model for auditory spatial attention detection. The model is composed of three neural layers, two of which consist of spiking neurons. We formulate a new learning rule based on the firing rates of pre-synaptic and post-synaptic neurons in the first and second layers of spiking neurons. The third layer consists of 10 spiking neurons whose firing-rate patterns after training are used in the test phase of the method. The proposed method extracts the patterns of EEG recorded during leftward and rightward attention independently and uses them to train the network to detect auditory spatial attention. In addition, a computational approach is presented to find the best single-trial EEG data to serve as training samples of leftward and rightward attention. Within this model, we study the roles of a low connectivity rate between the layers and of a specific range of learning parameters in sparse coding. Importantly, unlike most prior models, our method requires only 10% of the EEG data for training and achieves 90% accuracy on average. This study offers new insights into the role of sparse coding in both biological networks and brain-inspired machine learning.
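The abstract does not give the exact form of the learning rule, but a minimal sketch of the general idea, a Hebbian-style update driven by the product of pre- and post-synaptic firing rates across sparsely connected layers of leaky integrate-and-fire (LIF) neurons, might look as follows. The layer sizes, connectivity rate, thresholds, and learning rate below are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: firing-rate-based Hebbian update between two sparsely
# connected layers of LIF neurons. All constants are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 64, 32          # hypothetical layer sizes
P_CONN = 0.1                  # low connectivity rate, as emphasized in the abstract
DT, T_STEPS = 1e-3, 500       # 1 ms steps, 0.5 s window
TAU, V_TH = 20e-3, 1.0        # membrane time constant (s), firing threshold
ETA = 1e-5                    # learning rate (illustrative)

# Sparse random connectivity mask and initial weights
mask = rng.random((N_HID, N_IN)) < P_CONN
W = rng.random((N_HID, N_IN)) * mask

def run_layer(in_spikes, W):
    """Drive LIF neurons with weighted input spikes; return 0/1 output spike trains."""
    v = np.zeros(N_HID)
    out = np.zeros((T_STEPS, N_HID))
    for t in range(T_STEPS):
        v += DT / TAU * (-v) + W @ in_spikes[t]   # leak plus weighted spike input
        fired = v >= V_TH
        out[t] = fired
        v[fired] = 0.0                            # reset after a spike
    return out

# Placeholder input: Poisson spike trains standing in for alpha-band EEG features
in_spikes = (rng.random((T_STEPS, N_IN)) < 0.05).astype(float)
out_spikes = run_layer(in_spikes, W)

# Hebbian update on existing synapses only: dW proportional to post_rate x pre_rate
pre_rate = in_spikes.mean(axis=0) / DT      # Hz
post_rate = out_spikes.mean(axis=0) / DT    # Hz
W = np.clip(W + ETA * np.outer(post_rate, pre_rate) * mask, 0.0, 1.0)
```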







Author(s):  
Bruno Andre Santos ◽  
Rogerio Martins Gomes ◽  
Phil Husbands

In general, the mechanisms that maintain the activity of neural systems after a triggering stimulus has been removed are not well understood. Different mechanisms operating at the cellular and network levels have been proposed. In this work, based on analysis of a computational model of a spiking neural network, it is proposed that the spike that occurs after a neuron is inhibited (the rebound spike) can be used to sustain the activity in a recurrent inhibitory neural circuit after the stimulation has been removed. It is shown that, in order to sustain the activity, the neurons participating in the recurrent circuit should fire at low frequencies. It is also shown that the occurrence of a rebound spike depends on a combination of factors including synaptic weights, synaptic conductances and the neuron state. We point out that the model developed here is minimalist and does not aim at empirical accuracy. Its purpose is to raise and discuss theoretical issues that could contribute to the understanding of neural mechanisms underlying self-sustained neural activity.
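To make the central phenomenon concrete, the following is a minimal sketch of a post-inhibitory rebound spike in a single Izhikevich neuron. This is not the authors' network model; the parameters are those commonly used to demonstrate the rebound-spike regime of that model and are purely illustrative.

```python
# Sketch: a neuron fires a spike only AFTER a brief hyperpolarizing (inhibitory)
# pulse is released, without ever receiving excitatory input.
# Izhikevich model with parameters commonly used for the rebound-spike regime.
import numpy as np

a, b, c, d = 0.03, 0.25, -60.0, 4.0   # recovery/reset parameters (rebound regime)
dt, t_max = 0.2, 200.0                # ms
steps = int(t_max / dt)

v, u = -64.0, b * -64.0               # start near the resting state
spikes = []

for k in range(steps):
    t = k * dt
    I = -15.0 if 20.0 < t < 25.0 else 0.0        # brief inhibitory pulse only
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                      # spike: reset membrane and bump recovery
        spikes.append(t)
        v, u = c, u + d

# The only spike occurs shortly after the pulse ends (t > 25 ms): a rebound spike.
print("spike times (ms):", spikes)
```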



2018 ◽  
Vol 145 ◽  
pp. 488-494 ◽  
Author(s):  
Aleksandr Sboev ◽  
Alexey Serenko ◽  
Roman Rybka ◽  
Danila Vlasov ◽  
Andrey Filchenkov


2021 ◽  
Vol 1914 (1) ◽  
pp. 012036
Author(s):  
LI Wei ◽  
Zhu Wei-gang ◽  
Pang Hong-feng ◽  
Zhao Hong-yu


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2678
Author(s):  
Sergey A. Lobov ◽  
Alexey I. Zharinov ◽  
Valeri A. Makarov ◽  
Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of the global network memory using the synaptic vector field approach to validate results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect: the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps positive and negative areas, allowing it to escape the catastrophic interference phenomenon known in some AI architectures. Thus, the robot adapts to a changing world.
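The abstract does not spell out the synaptic vector field measure; the sketch below shows one plausible reading, in which each neuron in a spatially embedded network is assigned a weight-weighted mean direction toward its postsynaptic targets and a global memory measure is derived from the resulting vectors. The authors' exact definition may differ; positions, weights, and network size here are random placeholders.

```python
# Hedged sketch of a synaptic vector field for a spatially embedded SNN.
import numpy as np

rng = np.random.default_rng(1)
N = 100
pos = rng.random((N, 2))                              # neuron coordinates in the arena
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse excitatory weights
np.fill_diagonal(W, 0.0)

def synaptic_vector_field(W, pos):
    """Per-neuron vector: weight-weighted mean of unit vectors to postsynaptic targets."""
    field = np.zeros_like(pos)
    for i in range(len(pos)):
        diffs = pos - pos[i]                          # vectors to all other neurons
        norms = np.linalg.norm(diffs, axis=1)
        norms[i] = 1.0                                # avoid division by zero on self
        units = diffs / norms[:, None]
        total_w = W[i].sum()
        if total_w > 0:
            field[i] = (W[i][:, None] * units).sum(axis=0) / total_w
    return field

field = synaptic_vector_field(W, pos)

# One candidate global measure: the mean length of the per-neuron vectors.
# An untrained random network gives short, unaligned vectors; Hebbian training
# that biases connections toward "safe" directions would lengthen and align them.
print("mean vector length:", np.linalg.norm(field, axis=1).mean())
```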



Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1065
Author(s):  
Moshe Bensimon ◽  
Shlomo Greenberg ◽  
Moshe Haiut

This work presents a new approach based on a spiking neural network for sound preprocessing and classification. The proposed approach is biologically inspired, using spiking neurons that mimic the characteristics of biological neurons together with a Spike-Timing-Dependent Plasticity (STDP)-based learning rule. We propose a biologically plausible sound classification framework that uses a Spiking Neural Network (SNN) to detect the frequencies embedded within an acoustic signal. This work also demonstrates an efficient hardware implementation of the SNN based on the low-power Spike Continuous Time Neuron (SCTN). The proposed sound classification framework allows direct Pulse Density Modulation (PDM) interfacing of the acoustic sensor with the SCTN-based network, avoiding costly digital-to-analog conversions. This paper presents a new connectivity approach applied to Spiking Neuron (SN)-based neural networks. We suggest considering the SCTN neuron as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron is used as a repeated modular element in a neural network structure, and the connectivity between neurons located in different layers is well defined, generating a modular neural network structure composed of several layers with full or partial connectivity. The proposed approach instead controls the behavior of the spiking neurons and applies smart connectivity to enable the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, in which the preprocessing phase is carried out using analog circuits and analog-to-digital conversion, we suggest integrating the preprocessing phase into the network itself. This approach allows treating the basic SCTN as an analog module, enabling the design of simple SNN-based analog circuits with unique inter-connections between the neurons. The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based sound classification approach achieves a classification accuracy of 98.73% on the Real-World Computing Partnership (RWCP) database.
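For reference, a generic pair-based STDP update of the kind the abstract mentions is sketched below. It is not necessarily the exact rule used with SCTN neurons; the amplitudes, time constants, and weight bounds are illustrative.

```python
# Minimal sketch of a textbook pair-based STDP weight update.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms
W_MIN, W_MAX = 0.0, 1.0

def stdp_update(w, pre_spikes, post_spikes):
    """Apply pair-based STDP for all pre/post spike-time pairs (times in ms)."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation
                w += A_PLUS * np.exp(-dt / TAU_PLUS)
            elif dt < 0:    # post before pre -> depression
                w -= A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w, W_MIN, W_MAX))

# Example: a causal pre->post pairing strengthens the synapse.
print(stdp_update(0.5, pre_spikes=[10.0, 30.0], post_spikes=[12.0, 33.0]))
```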


