recurrent connections
Recently Published Documents

TOTAL DOCUMENTS: 45 (five years: 17)
H-INDEX: 11 (five years: 2)
2021 ◽  
pp. 1-28
Author(s):  
Wenrui Zhang ◽  
Peng Li

Abstract: As an important class of spiking neural networks (SNNs), recurrent spiking neural networks (RSNNs) possess great computational power and have been widely used for processing sequential data such as audio and text. However, most RSNNs suffer from two problems. First, due to the lack of architectural guidance, random recurrent connectivity is often adopted, which does not guarantee good performance. Second, training RSNNs is in general challenging, bottlenecking achievable model accuracy. To address these problems, we propose a new type of RSNN, skip-connected self-recurrent SNNs (ScSr-SNNs). Recurrence in ScSr-SNNs is introduced by adding self-recurrent connections to spiking neurons. SNNs with self-recurrent connections can realize recurrent behaviors similar to those of more complex RSNNs, while the error gradients can be calculated more straightforwardly due to the mostly feedforward nature of the network. The network dynamics are further enriched by skip connections between nonadjacent layers. Moreover, we propose a new backpropagation (BP) method, backpropagated intrinsic plasticity (BIP), to further boost the performance of ScSr-SNNs by training intrinsic model parameters. Unlike standard intrinsic plasticity rules that adjust a neuron's intrinsic parameters according to neuronal activity, the proposed BIP method optimizes intrinsic parameters based on the backpropagated error gradient of a well-defined global loss function, in addition to synaptic weight training. On challenging speech, neuromorphic speech, and neuromorphic image data sets, the proposed ScSr-SNNs boost performance by up to 2.85% compared with other types of RSNNs trained by state-of-the-art BP methods.
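The self-recurrent idea can be sketched as a leaky integrate-and-fire layer in which each neuron's previous spike feeds back only onto itself (a diagonal recurrent weight), so the network stays mostly feedforward. A minimal NumPy sketch, not the authors' implementation; the function names, constants, and input statistics are assumptions:

```python
import numpy as np

def scsr_layer_step(v, s_prev, x, w_in, w_self, v_th=1.0, tau=0.9):
    """One time step of a layer of leaky integrate-and-fire neurons,
    each with a single self-recurrent connection: the spike a neuron
    emitted at t-1 feeds back into its own input at t."""
    # membrane potential: leak + feedforward input + own previous spike
    v = tau * v + x @ w_in + w_self * s_prev
    s = (v >= v_th).astype(float)   # spike where threshold is crossed
    v = v * (1.0 - s)               # reset membrane after a spike
    return v, s

rng = np.random.default_rng(0)
n_in, n_out, T = 4, 3, 20
w_in = rng.normal(0.0, 0.5, (n_in, n_out))
w_self = rng.uniform(0.1, 0.5, n_out)   # diagonal self-recurrence only

v = np.zeros(n_out)
s = np.zeros(n_out)
spikes = []
for t in range(T):
    x = rng.random(n_in)                # toy input stream
    v, s = scsr_layer_step(v, s, x, w_in, w_self)
    spikes.append(s)
spikes = np.array(spikes)               # (T, n_out) binary spike trains
```

Because the only recurrent weights sit on the diagonal, unrolling this in time looks like a feedforward graph with one extra per-neuron input, which is what makes gradient calculation more straightforward than in a fully recurrent SNN.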


Author(s):  
Tianshi Gao ◽  
Bin Deng ◽  
Jixuan Wang ◽  
Jiang Wang ◽  
Guosheng Yi

The regularity of inter-spike intervals (ISIs) gives a critical window into how information is coded temporally in the cortex. Previous research has mostly adopted pure feedforward networks (FFNs) to study how network structure affects the propagation of spiking regularity, ignoring the role of local dynamics within each layer. In this paper, we construct an FFN with recurrent connections and investigate the propagation of spiking regularity. We argue that an FFN with recurrent connections serves as a basic circuit to explain why regularity increases as spikes propagate from middle temporal visual areas to higher cortical areas. We find that the reduction of regularity is related to the decreased complexity of the shared activity co-fluctuations. We show in simulations that there is an appropriate excitation–inhibition ratio that maximizes the regularity of deeper layers. Furthermore, we demonstrate that collective temporal regularity in deeper layers exhibits resonance-like behavior with respect to both synaptic connection probability and synaptic weight. Our work provides a critical link between cortical circuit structure and realistic spiking regularity.
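ISI regularity of the sort studied here is commonly quantified by the coefficient of variation (CV) of the intervals; the abstract does not state the paper's exact measure, so the following is a generic sketch:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals.
    CV is near 1 for Poisson-like (irregular) firing and approaches 0
    for a perfectly regular, clock-like spike train."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()

regular = np.arange(0.0, 1.0, 0.01)              # clock-like train
rng = np.random.default_rng(1)
poisson = np.cumsum(rng.exponential(0.01, 100))  # Poisson-like train

cv_reg = isi_cv(regular)   # ~0: all intervals identical
cv_poi = isi_cv(poisson)   # ~1: exponential intervals
```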


2021 ◽  
Author(s):  
Brett W. Larsen ◽  
Shaul Druckmann

Abstract: Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such “tag propagation” algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness, to explore feedforward solutions more fully and robustly establish the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted, biologically inspired decision-making task. More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.

Author Summary: Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research, which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architecture affect a network’s ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image, which has an elegant and efficient recurrent solution: propagate a connected label, or tag, along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging, in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and on how these solutions may appear in neural activity.
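The tag-propagation solution to connectedness can be illustrated directly: repeatedly spread a label to neighbouring foreground pixels, a purely local update of the kind a recurrent layer can iterate until convergence. A minimal sketch (not the paper's trained networks; the grid, 4-connectivity, and names are assumptions):

```python
import numpy as np

def connected(image, src, dst):
    """Decide whether foreground pixels src and dst lie in the same
    connected component by repeatedly propagating a 'tag' to the four
    neighbours — every step is a purely local, parallel update."""
    fg = image.astype(bool)
    tag = np.zeros_like(fg)
    tag[src] = fg[src]
    for _ in range(image.size):          # worst-case number of sweeps
        spread = tag.copy()
        spread[1:, :] |= tag[:-1, :]     # tag flows down
        spread[:-1, :] |= tag[1:, :]     # ... up
        spread[:, 1:] |= tag[:, :-1]     # ... right
        spread[:, :-1] |= tag[:, 1:]     # ... left
        spread &= fg                     # tags live only on foreground
        if (spread == tag).all():        # converged: tag fully spread
            break
        tag = spread
    return bool(tag[dst])

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 0],
                [0, 1, 1, 0]])
same = connected(img, (0, 0), (3, 2))    # linked via the middle column
diff = connected(img, (0, 0), (0, 3))    # separated by the zero column
```

The number of sweeps needed grows with path length, which is the computation-time/performance tradeoff the abstract highlights for feedforward approximations of this iteration.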


2021 ◽  
Author(s):  
Muneshwar Mehra ◽  
Adarsh Mukesh ◽  
Sharba Bandyopadhyay

Abstract: Auditory cortex (ACX) neurons are sensitive to spectro-temporal sound patterns and to violations in patterns induced by rare stimuli embedded within streams of sounds. We investigate the auditory cortical representation of repeated presentations of sequences of sounds comprising standard (common) stimuli with an embedded deviant (rare) stimulus in two conditions – Periodic (fixed deviant position) or Random (random deviant position) – using extracellular single-unit and 2-photon Ca2+ imaging recordings in layer 2/3 neurons of the mouse ACX. In the population average, responses increased over repetitions in the Random condition and were suppressed or unchanged in the Periodic condition, showing irregularity preference. A subset of neurons showed the opposite behavior, indicating regularity preference. Pairwise noise correlations were higher in the Random condition than in the Periodic condition, suggesting a role for recurrent connections. 2-photon Ca2+ imaging of excitatory (EX), parvalbumin-positive (PV), and somatostatin-positive (SOM) inhibitory neurons showed different categories of adaptation or change in response over repetitions (categorized by the sign of the slope of change), as observed with single units. However, examination of functional connectivity between pairs of neurons of different categories showed that EX-PV connections behaved opposite to EX-EX and EX-SOM pairs, which show more functional connections outside their category in the Random condition than in the Periodic condition. Finally, considering regularity-preference, irregularity-preference, and no-preference categories showed that EX-EX and EX-SOM connections lie largely in separate functional subnetworks with different preferences, while EX-PV connections were more spread out. Thus, separate subnetworks could underlie the coding of periodic and random sound sequences.

Significance Statement: Studying how ACX neurons respond to streams of sound sequences helps us understand the importance of changes in the dynamic, noisy acoustic scenes around us. Humans and animals are sensitive to regularity and its violations in sound sequences. Psychophysical tasks in humans show that the auditory brain responds differentially to periodic and random structures, independent of the listener's attentional state. Here we show that mouse ACX L2/3 neurons detect a change and respond differentially to changing patterns over long time scales. The differential functional-connectivity profiles obtained in response to two different sound contexts suggest a strong role of recurrent connections in the auditory cortical network. Furthermore, excitatory-inhibitory neuronal interactions can contribute to detecting changing sound patterns.
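Pairwise noise correlation, the measure compared across conditions above, is typically computed as the correlation of trial-to-trial fluctuations around each neuron's mean response to repeated presentations of the same stimulus; shared (e.g. recurrent) input inflates it. A generic sketch, not the authors' analysis pipeline:

```python
import numpy as np

def noise_correlation(resp_a, resp_b):
    """Pairwise noise correlation: Pearson correlation of the
    trial-to-trial fluctuations of two neurons around their mean
    responses to repeated presentations of the same stimulus."""
    res_a = resp_a - resp_a.mean()   # remove each neuron's mean (signal)
    res_b = resp_b - resp_b.mean()
    return float(np.corrcoef(res_a, res_b)[0, 1])

rng = np.random.default_rng(2)
shared = rng.normal(size=200)              # common input fluctuation
a = 5.0 + shared + rng.normal(size=200)    # neuron A: mean + shared + private
b = 3.0 + shared + rng.normal(size=200)    # neuron B shares the fluctuation
c = 3.0 + rng.normal(size=200)             # neuron C: private noise only

r_shared = noise_correlation(a, b)   # elevated by the common input
r_indep = noise_correlation(a, c)    # near zero
```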


2020 ◽  
Author(s):  
Hari Teja Kalidindi ◽  
Kevin P. Cross ◽  
Timothy P. Lillicrap ◽  
Mohsen Omrani ◽  
Egidio Falotico ◽  
...  

Summary: Recent studies hypothesize that motor cortical (MC) dynamics are generated largely through its recurrent connections, based on observations that MC activity exhibits rotational structure. However, behavioural and neurophysiological studies suggest that MC behaves like a feedback controller, where continuous sensory feedback and interactions with other brain areas contribute substantially to MC processing. We investigated these apparently conflicting theories by building recurrent neural networks that controlled a model arm and received sensory feedback about the limb. Networks were trained to counteract perturbations to the limb and to reach towards spatial targets. Network activities and sensory feedback signals to the network exhibited rotational structure even when the recurrent connections were removed. Furthermore, neural recordings in monkeys performing similar tasks also exhibited rotational structure, not only in MC but also in somatosensory cortex. Our results argue that rotational structure may reflect dynamics throughout the voluntary motor circuits involved in online control of motor actions.

Highlights:
- Neural networks with sensory feedback generate rotational dynamics during simulated posture and reaching tasks.
- Rotational dynamics are observed even without recurrent connections in the network.
- Similar dynamics are observed not only in motor cortex but also in somatosensory cortex of non-human primates, as well as in sensory feedback signals.
- Results highlight that rotational dynamics may reflect internal dynamics, external inputs, or any combination of the two.
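Rotational structure in population activity is commonly assessed jPCA-style: fit linear dynamics dx ≈ Mx to the trajectory and measure how much of M is skew-symmetric (pure rotation) versus symmetric (expansion or decay). The toy sketch below also illustrates the paper's caveat: a trajectory driven by rotating external input yields the same signature as one generated by recurrence, so the fit alone cannot distinguish them. All data here are assumed, not from the study:

```python
import numpy as np

# Toy "population activity": a planar rotation by angle theta per step.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.zeros((200, 2))
x[0] = [1.0, 0.0]
for t in range(199):
    x[t + 1] = R @ x[t]

# Fit linear dynamics dx ≈ M x, then split M into skew-symmetric
# (rotational) and symmetric (expand/decay) parts.
dx = x[1:] - x[:-1]
W, *_ = np.linalg.lstsq(x[:-1], dx, rcond=None)  # dx ≈ x @ W
M = W.T                                          # dx ≈ M @ x
M_skew = 0.5 * (M - M.T)                         # rotational component
rotational_fraction = np.linalg.norm(M_skew) / np.linalg.norm(M)
```

Here `rotational_fraction` is near 1, exactly the signature reported for MC, even though nothing in the fit says whether the rotation came from recurrent connections or from the input.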


2020 ◽  
Author(s):  
Wen-Hao Zhang ◽  
Si Wu ◽  
Krešimir Josić ◽  
Brent Doiron

Abstract: A large part of the synaptic input received by cortical neurons comes from local cortico-cortical connectivity. Despite their abundance, the role of local recurrence in cortical function is unclear, and in simple coding schemes it is often the case that a circuit with no recurrent connections performs optimally. We consider a recurrent excitatory-inhibitory circuit model of a cortical hypercolumn which performs sampling-based Bayesian inference to infer latent hierarchical stimulus features. We show that local recurrent connections can store an internal model of the correlations between stimulus features that are present in the external world. When the resulting recurrent input is combined with feedforward input it produces a population code from which the posterior over the stimulus features can be linearly read out. Internal Poisson spiking variability provides the proper fluctuations for the population to sample stimulus features, yet the resultant population variability is aligned along the stimulus feature direction, producing what are termed differential correlations. Importantly, the amplitude of these internally generated differential correlations is determined by the associative prior in the model stored in the recurrent connections, thus providing experimentally testable predictions for how population connectivity and response variability are connected to the structure of latent external stimuli.
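"Differential correlations" are noise covariances proportional to the outer product f'f'ᵀ, where f'(s) is the derivative of the population tuning curve with respect to the stimulus: fluctuations that mimic small jitters of the stimulus itself. A toy sketch (assumed tuning curves and noise levels, not the paper's circuit model) showing that such noise aligns the leading eigenvector of the population covariance with f':

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_trials = 20, 5000
s = 0.0                                    # presented stimulus value
prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
f = np.exp(np.cos(prefs - s))              # assumed tuning curves f(s)
f_prime = np.sin(prefs - s) * f            # derivative df/ds of the tuning

# Differential correlations: each trial's response is jittered as if the
# stimulus itself moved slightly, adding a covariance term ~ f' f'^T.
eps = rng.normal(0.0, 0.2, n_trials)       # stimulus-like jitter per trial
counts = f + np.outer(eps, f_prime) + rng.normal(0.0, 0.1, (n_trials, n_neurons))

cov = np.cov(counts.T)                     # population noise covariance
top = np.linalg.eigh(cov)[1][:, -1]        # leading eigenvector
alignment = abs(top @ f_prime) / np.linalg.norm(f_prime)   # |cos angle| with f'
```

`alignment` close to 1 is the "variability aligned along the stimulus feature direction" described above; in the paper, the size of the jitter is set by the associative prior stored in the recurrent weights.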


2020 ◽  
Author(s):  
Miguel A. Casal ◽  
Santiago Galella ◽  
Oscar Vilarroya ◽  
Jordi Garcia-Ojalvo

Neuronal networks provide living organisms with the ability to process information. They are also characterized by abundant recurrent connections, which give rise to strong feedback that dictates their dynamics and endows them with fading (short-term) memory. The role of recurrence in long-term memory, on the other hand, is still unclear. Here we use the neuronal network of the roundworm C. elegans to show that recurrent architectures in living organisms can exhibit long-term memory without relying on specific hard-wired modules. A genetic algorithm reveals that the experimentally observed dynamics of the worm’s neuronal network exhibits maximal complexity (as measured by permutation entropy). In that complex regime, the response of the system to repeated presentations of a time-varying stimulus reveals a consistent behavior that can be interpreted as soft-wired long-term memory.

A common manifestation of our ability to remember the past is the consistency of our responses to repeated presentations of stimuli across time. Complex chaotic dynamics is known to produce such reliable responses in spite of its characteristic sensitive dependence on initial conditions. In neuronal networks, complex behavior is known to result from a combination of (i) recurrent connections and (ii) a balance between excitation and inhibition. Here we show that these features concur in the neuronal network of a living organism, namely C. elegans. This enables long-term memory to arise in an online manner, without having to be hard-wired in the brain.
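Permutation entropy, the complexity measure named above, is the Shannon entropy of the distribution of ordinal patterns formed by consecutive samples of a time series (Bandt–Pompe). A minimal sketch; the window length and normalization below are common choices, not necessarily the paper's:

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of `order` consecutive samples, divided by log2(order!)
    so that 1 means maximally complex and 0 means fully predictable."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    probs = np.array([c for c in counts.values() if c > 0], dtype=float)
    probs /= probs.sum()
    return float(-(probs * np.log2(probs)).sum() / math.log2(math.factorial(order)))

rng = np.random.default_rng(4)
pe_noise = permutation_entropy(rng.normal(size=2000))  # white noise: near 1
pe_trend = permutation_entropy(np.arange(2000.0))      # monotone ramp: 0
```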

