Sequential Temporal Anticipation Characterized by Neural Power Modulation and in Recurrent Neural Networks

2021
Author(s):  
Xiangbin Teng ◽  
Ru-Yuan Zhang

Complex human behaviors involve perceiving continuous stimuli and planning actions at sequential time points, as in perceiving and producing speech and music. To guide adaptive behavior, the brain needs to internally anticipate a sequence of prospective moments. How does the brain achieve this sequential temporal anticipation without relying on any external timing cues? To answer this question, we designed a premembering task: we tagged three temporal locations in white noise by asking human listeners to detect a tone presented at one of those locations. Using novel modulation analyses, we then selectively probed the memory-guided anticipatory processes in trials containing only flat noise. A multiscale anticipation scheme was revealed: neural power modulation in the delta band encodes noise duration on a supra-second scale, whereas modulations in the alpha-beta band mark the tagged temporal locations on a subsecond scale and correlate with tone-detection performance. To unveil the functional role of these neural observations, we turned to recurrent neural networks (RNNs) optimized for the behavioral task. The RNN hidden dynamics resembled the neural modulations; further analyses and perturbations of the RNNs suggest that the power modulations in the alpha/beta band emerge from selectively suppressing irrelevant noise periods while increasing sensitivity at the anticipated temporal locations. Our neural, behavioral, and modelling findings convergently demonstrate that sequential temporal anticipation involves a process of dynamic gain control: to anticipate a few meaningful moments is also to actively ignore the irrelevant events that happen most of the time.
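
The modulation analyses revolve around band-limited neural power on two timescales. As a minimal sketch of how such band-limited power time courses can be extracted (band-pass filtering followed by a squared Hilbert envelope), assuming a generic sampling rate, band edges, and filter settings rather than the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(x, fs, low, high, order=4):
    """Band-pass filter x and return its instantaneous power envelope."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return np.abs(hilbert(filtered)) ** 2

rng = np.random.default_rng(0)
fs = 500.0                                     # sampling rate in Hz (assumed)
t = np.arange(0, 3.0, 1 / fs)                  # one 3-second noise epoch
signal = rng.standard_normal(t.size)           # stand-in for a single M/EEG channel

delta_power = band_power_envelope(signal, fs, 1.0, 4.0)        # supra-second scale
alpha_beta_power = band_power_envelope(signal, fs, 8.0, 30.0)  # sub-second scale
```

In this toy example the input is random noise; in the study, the delta and alpha-beta power envelopes would be computed from the recorded neural signals during the noise-only trials.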

Author(s):  
M. Hresko

The aim of the study was to determine the role of individual typological features in the perception of the radiation threat. A retrospective and comparative analysis of psychometric and neurophysiological parameters was conducted in participants of the Chernobyl accident clean-up (liquidators). In the post-accident period the liquidators show a deformation of personality characterized by increased introversion and neuroticism, a sharpening of character traits, and a larger number of accentuations of the emotive, pedantic, anxious, cyclothymic, dysthymic and excitable types. A neurophysiological basis of these individual typological characteristics was identified: an increase in the relative and absolute spectral power of the delta band, a decrease in the relative and absolute spectral power of the beta band, and a decrease in the dominant frequency. Such changes may indicate an organic lesion of the brain, mainly in the cortico-limbic system. The radiation dose correlated positively with the relative spectral power of the delta and theta bands, and negatively with the relative and absolute spectral power of the beta band and with the dominant frequency. Personality features such as extraversion, neuroticism, cyclothymia and excitability correlated positively with the radiation dose. The liquidators' perception of radiation-related risks is inadequate: diseases associated with ionizing radiation rank fifth, the danger of airborne radiation ranks eighth, while the risk factors "smoking" and "alcohol use" occupy the last ranks. A hypertrophied perception of the radiation threat correlates positively with personality traits such as emotiveness, pedantry, demonstrativeness, anxiety and exaltation.
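
The spectral measures reported here (absolute and relative band power, dominant frequency) are standard EEG quantities. A minimal sketch of how they can be computed with Welch's method, assuming illustrative band limits and window settings rather than the study's exact analysis parameters:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs):
    """Absolute band power, power relative to the 1-30 Hz total, and dominant frequency."""
    freqs, psd = welch(x, fs=fs, nperseg=int(4 * fs))
    df = freqs[1] - freqs[0]
    in_range = (freqs >= 1) & (freqs <= 30)
    total = psd[in_range].sum() * df
    out = {"dominant_frequency_hz": freqs[in_range][np.argmax(psd[in_range])]}
    for name, (lo, hi) in BANDS.items():
        absolute = psd[(freqs >= lo) & (freqs < hi)].sum() * df
        out[name] = {"absolute": absolute, "relative": absolute / total}
    return out

rng = np.random.default_rng(0)
fs = 250.0
eeg = rng.standard_normal(int(60 * fs))   # one minute of a single channel (stand-in data)
print(band_powers(eeg, fs))
```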


2006
Vol 29 (1)
pp. 81-81
Author(s):  
Ralph-Axel Müller

Although van der Velde and de Kamps's (vdV&dK) attempt to put syntactic processing into the broader context of combinatorial cognition is promising, their coverage of the neuroscientific evidence is disappointing. Neither their case against binding by temporal coherence nor their arguments against recurrent neural networks are compelling. As an alternative, vdV&dK propose a blackboard model based on the assumption of special processors (e.g., lexical versus grammatical), yet evidence from the cognitive neuroscience of language, which is, overall, less than supportive of such special processors, is not considered. As a consequence, vdV&dK's blackboard architecture may be a clever model of syntactic processing, but it remains unclear how much it can teach us about biologically based human language.


1993
Vol 03 (02)
pp. 279-291
Author(s):  
B. DOYON ◽  
B. CESSAC ◽  
M. QUOY ◽  
M. SAMUELIDES

The occurrence of chaos in recurrent neural networks is expected to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. We introduce a bifurcation parameter, independent of the connectivity, that allows sustained activity and the onset of chaos once a critical value is reached. Even for weak connectivity and small network size, the numerical results agree with the theoretical ones previously established for fully connected networks of infinite size. Moreover, the route towards chaos is numerically confirmed to be quasiperiodic, whatever the type of the first bifurcation. In the discussion, we relate these results to recent theoretical work on highly diluted networks. Hints are provided for further investigations to elucidate the role of chaotic dynamics in the cognitive processes of the brain.
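
A minimal numerical sketch of the kind of system studied here: a randomly diluted network with discrete-time tanh dynamics whose largest Lyapunov exponent can be estimated by tracking a renormalized perturbation. The update rule, coupling gain, and estimator are generic illustrative choices, not the paper's exact model or bifurcation parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dilution, g = 200, 0.9, 2.5        # size, fraction of pruned links, coupling gain
mask = rng.random((N, N)) > dilution  # keep roughly 10% of the connections
J = g * rng.standard_normal((N, N)) * mask / np.sqrt(N * (1 - dilution))

def step(x):
    """One discrete-time update of the rate network."""
    return np.tanh(J @ x)

x = rng.standard_normal(N) * 0.1
for _ in range(500):                   # discard the transient
    x = step(x)

# Benettin-style estimate: follow a tiny perturbation and renormalize it each step.
d0, log_growth, steps = 1e-8, 0.0, 1000
y = x + d0 * rng.standard_normal(N) / np.sqrt(N)
for _ in range(steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(x - y)
    log_growth += np.log(d / d0)
    y = x + (y - x) * (d0 / d)         # rescale the separation back to d0
print("largest Lyapunov exponent estimate:", log_growth / steps)  # > 0 suggests chaos
```

Sweeping the gain g (or the dilution) and watching where the exponent estimate crosses zero gives a crude picture of the transition to chaos discussed in the paper.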


F1000Research
2016
Vol 5
pp. 2416
Author(s):  
András Szilágyi ◽  
István Zachar ◽  
Anna Fedor ◽  
Harold P. de Vladar ◽  
Eörs Szathmáry

Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for the generation of new variation in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation; novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during the transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture supports fast search among stored solutions (by selection) and evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
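
A toy sketch of the proposed loop, assuming a simple Hebbian (Hopfield-style) store in place of the palimpsest attractor networks and an invented pattern-matching fitness function: candidate solutions are stored, recalled with noise (supplying heritable variation), and the better recalls repopulate the memory (selection). This illustrates the architecture described above, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, POP, GENS = 64, 20, 40
target = rng.choice([-1, 1], size=N)           # hypothetical problem: match this pattern

def fitness(p):
    return np.mean(p == target)

def store(patterns):
    """Hebbian weight matrix built from +/-1 patterns (stands in for palimpsest memory)."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0)
    return W

def noisy_recall(W, cue, steps=15, noise=0.05):
    """Iterated recall with bit-flip noise, the source of heritable variation here."""
    x = cue.copy()
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
        flips = rng.random(x.size) < noise
        x[flips] *= -1
    return x

population = [rng.choice([-1, 1], size=N) for _ in range(POP)]
for _ in range(GENS):
    W = store(population)
    # Replication: recall noisy copies of randomly cued stored solutions.
    offspring = [noisy_recall(W, population[rng.integers(POP)]) for _ in range(POP)]
    # Winners-share-all-style selection: the better half repopulates the memory.
    winners = sorted(offspring, key=fitness, reverse=True)[: POP // 2]
    population = winners + [w.copy() for w in winners]

print("best fitness after search:", max(fitness(p) for p in population))
```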


2021
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings specifying how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations and not with traditional sensory models of working memory. In task-optimized recurrent neural networks, we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.
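
To make the distinction concrete, here is a deliberately minimal contrast, using an invented match/nonmatch task, between a sensory code (store the stimulus, compute the response when the probe arrives) and a contingency code (store, in advance, the response for every possible probe). The two agree in output but differ in what is held in memory.

```python
sample = "red"  # remembered sample stimulus (invented example)

# Sensory model: store the stimulus itself and compute the response at probe time.
def respond_sensory(memory, probe):
    return "match" if probe == memory else "nonmatch"

# Contingency model: store, ahead of time, the response for every possible probe.
contingency = {probe: ("match" if probe == sample else "nonmatch")
               for probe in ("red", "green", "blue")}

def respond_contingency(memory, probe):
    return memory[probe]  # the future behavior is already mapped out

for probe in ("red", "green"):
    assert respond_sensory(sample, probe) == respond_contingency(contingency, probe)
```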


2018
Author(s):  
Patrick Krauss ◽  
Marc Schuster ◽  
Verena Dietrich ◽  
Achim Schilling ◽  
Holger Schulze ◽  
...  

Abstract: Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths wij between the individual neurons. However, little is known about the extent to which network dynamics can be tuned on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamical impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with wij = wji). By computing a ‘phase diagram’ of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the ‘edge of chaos’ by assuring a proper balance between excitatory and inhibitory neural connections.

Author summary: Computations in the brain need to be both reproducible and sensitive to changing input from the environment. It has been shown that recurrent neural networks can meet these simultaneous requirements only in a particular dynamical regime, called the edge of chaos in non-linear systems theory. Here, we demonstrate that recurrent neural networks can be easily tuned to this critical regime of optimal information processing by assuring a proper ratio of excitatory and inhibitory connections between the neurons. This result is in line with several micro-anatomical studies of the cortex, which frequently confirm that the excitatory-inhibitory balance is strictly conserved in the cortex. Furthermore, it turns out that neural dynamics is largely independent of the total density of connections, a feature that explains how the brain remains functional during periods of growth or decay. Finally, we find that the existence of too many symmetric connections is detrimental to the critical dynamical regime mentioned above, but may in turn be useful for pattern completion tasks.
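
A minimal sketch, assuming a discrete-time tanh rate network, of how the three statistical parameters can be dialed in when sampling a weight matrix. Here 'balance' is simplified to the probability that a connection is excitatory (rather than the excitatory-to-inhibitory ratio used in the paper), and the dynamics, network size, and parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100

def make_weights(density=0.2, balance=0.0, symmetry=0.5):
    """Sample a weight matrix with given density, balance, and symmetry.

    density: fraction of non-zero connections.
    balance: in [-1, 1]; probability of an excitatory link is (1 + balance) / 2
             (a simplification of the excitatory/inhibitory ratio in the paper).
    symmetry: fraction of neuron pairs forced to satisfy w_ij == w_ji.
    """
    W = np.zeros((N, N))
    iu = np.triu_indices(N, k=1)
    n_pairs = iu[0].size

    def draw():
        exists = rng.random(n_pairs) < density
        sign = np.where(rng.random(n_pairs) < (1 + balance) / 2, 1.0, -1.0)
        return exists * sign * np.abs(rng.standard_normal(n_pairs))

    upper = draw()
    symmetric = rng.random(n_pairs) < symmetry
    lower = np.where(symmetric, upper, draw())   # copy the pair or redraw it
    W[iu] = upper
    W[iu[1], iu[0]] = lower
    return W

W = make_weights(density=0.2, balance=0.0, symmetry=0.3)
x = rng.standard_normal(N) * 0.1
trace = []
for _ in range(500):
    x = np.tanh(W @ x)            # autonomous dynamics, no external input
    trace.append(x.copy())
late = np.array(trace[-100:])
print("std of late activity:", late.std())   # ~0 for fixed points, > 0 otherwise
```

Sweeping balance from negative to positive while recording a summary of the late activity reproduces, in spirit, the kind of phase diagram described above.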


2021
Author(s):  
Quan Wan ◽  
Jorge A. Menendez ◽  
Bradley R. Postle

How does the brain prioritize among the contents of working memory to appropriately guide behavior? Using inverted encoding modeling (IEM), previous work (Wan et al., 2020) showed that unprioritized memory items (UMI) are actively represented in the brain, but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To gain insight into the mechanisms underlying the UMI-to-PMI representational transformation, we trained recurrent neural networks (RNNs) with an LSTM architecture to perform a 2-back working memory task. Visualization of the LSTM hidden-layer activity using Principal Component Analysis (PCA) revealed that the UMI representation is rotationally remapped to that of the PMI, and this was quantified and confirmed via demixed PCA. Applying the same analyses to the EEG dataset of Wan et al. (2020) revealed a similar rotational remapping between the UMI and PMI representations. These results identify rotational remapping as a candidate neural computation employed in the dynamic prioritization of the contents of working memory.
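
As a self-contained illustration of the rotational-remapping idea, the sketch below plants a 90-degree rotation in the top two principal components of synthetic condition patterns and recovers it with an orthogonal Procrustes fit. The synthetic data stand in for LSTM hidden states or EEG patterns, and the plain Procrustes fit stands in for the paper's demixed-PCA quantification.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
n_cond, n_units = 6, 40                    # e.g. 6 stimulus classes, 40 hidden units
umi = rng.standard_normal((n_cond, n_units))

# Build a synthetic PMI pattern: UMI rotated by 90 degrees within its top-2 PC plane.
centered = umi - umi.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
plane = vt[:2]                             # top-2 principal axes (2 x n_units)
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
scores = centered @ plane.T                # 2-D PC scores of the UMI patterns
pmi = umi + (scores @ (rot90 - np.eye(2))) @ plane

scores_umi = scores
scores_pmi = (pmi - pmi.mean(axis=0)) @ plane.T
R, _ = orthogonal_procrustes(scores_umi, scores_pmi)   # best 2x2 orthogonal map
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(f"fitted rotation angle: {angle:.1f} degrees")   # ~90 for this synthetic case
```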

