Breeding novel solutions in the brain: A model of Darwinian neurodynamics

F1000Research ◽  
2017 ◽  
Vol 5 ◽  
pp. 2416
Author(s):  
András Szilágyi ◽  
István Zachar ◽  
Anna Fedor ◽  
Harold P. de Vladar ◽  
Eörs Szathmáry

Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variation in theory space. We present a model of Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop, and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop operates with winners-share-all dynamics to select candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation; novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during the transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture supports fast search among stored solutions (by selection) and evolutionary search in which novel candidate solutions are generated over successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
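
To make the architecture concrete, here is a minimal Python sketch (not the authors' implementation) of the ingredients named above: a population of Hopfield-style attractor networks with forgetful, palimpsest-like Hebbian storage, noisy recall that yields variant and spurious patterns, and a winners-share-all step that copies the fittest candidates, with transmission noise, into the other networks. The network size, noise levels, forgetting factor, and toy fitness function are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of Darwinian search over attractor networks.
import numpy as np

rng = np.random.default_rng(0)
N = 64                # pattern length (neurons per attractor network)
POP = 8               # number of attractor networks in the population
GAMMA = 0.9           # palimpsest forgetting factor for Hebbian storage
RECALL_NOISE = 0.05   # probability of flipping a bit during noisy recall
TARGET = rng.choice([-1, 1], size=N)   # unknown "solution" the search looks for

def store(W, pattern):
    """Forgetful Hebbian storage: older memories decay (palimpsest memory)."""
    return GAMMA * W + np.outer(pattern, pattern) / N

def recall(W, cue, steps=20):
    """Noisy attractor recall from a cue; noise and spurious attractors create variants."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
        flip = rng.random(N) < RECALL_NOISE
        s[flip] *= -1
    return s

def fitness(pattern):
    """Toy fitness: similarity to the target pattern (illustrative only)."""
    return np.mean(pattern == TARGET)

# Initialise each network with one random stored pattern.
weights = [store(np.zeros((N, N)), rng.choice([-1, 1], size=N)) for _ in range(POP)]

for generation in range(50):
    # Each network proposes a candidate by noisy recall from a random cue.
    cues = [rng.choice([-1, 1], size=N) for _ in range(POP)]
    candidates = [recall(W, c) for W, c in zip(weights, cues)]
    scores = np.array([fitness(c) for c in candidates])

    # Winners-share-all: the best candidates are copied, with transmission noise,
    # into the other networks, partially overwriting their palimpsest memories.
    winners = np.argsort(scores)[-2:]
    for i in range(POP):
        message = candidates[rng.choice(winners)].copy()
        flip = rng.random(N) < 0.02            # noise during transmission
        message[flip] *= -1
        weights[i] = store(weights[i], message)

print("best similarity to target:", fitness(candidates[int(np.argmax(scores))]))
```

Run over many generations, selection on noisily recalled and recombined patterns gradually pulls the stored candidates toward high-fitness solutions, which is the kind of evolutionary combinatorial search the abstract describes.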


2021 ◽  
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations, and not with traditional sensory models of working memory. In task-optimized recurrent neural networks we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.


2021 ◽  
Author(s):  
Quan Wan ◽  
Jorge A. Menendez ◽  
Bradley R. Postle

How does the brain prioritize among the contents of working memory to appropriately guide behavior? Using inverted encoding modeling (IEM), previous work (Wan et al., 2020) showed that unprioritized memory items (UMI) are actively represented in the brain but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To gain insight into the mechanisms underlying the UMI-to-PMI representational transformation, we trained recurrent neural networks (RNNs) with an LSTM architecture to perform a 2-back working memory task. Visualization of the LSTM hidden layer activity using Principal Component Analysis (PCA) revealed that the UMI representation is rotationally remapped to that of the PMI, and this was quantified and confirmed via demixed PCA. The application of the same analyses to the EEG dataset of Wan et al. (2020) revealed similar rotational remapping between the UMI and PMI representations. These results identify rotational remapping as a candidate neural computation employed in the dynamic prioritization of the contents of working memory.
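
For readers who want to see the pipeline end to end, the following is a minimal sketch (not the authors' code) of the approach described above: an LSTM is trained on a toy 2-back task and its hidden-state trajectories are then projected onto their first principal components. The stimulus-set size, hidden dimension, training budget, and the use of plain SVD-based PCA rather than demixed PCA are assumptions made for brevity.

```python
# Minimal sketch (not the authors' code): LSTM on a toy 2-back task + PCA of hidden states.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
N_STIM, SEQ_LEN, HIDDEN = 6, 12, 64

def make_batch(batch_size=128):
    """Sequences of one-hot stimuli; label = 1 if the current item matches the item 2 steps back."""
    stim = torch.randint(0, N_STIM, (batch_size, SEQ_LEN))
    x = torch.nn.functional.one_hot(stim, N_STIM).float()
    y = torch.zeros(batch_size, SEQ_LEN)
    y[:, 2:] = (stim[:, 2:] == stim[:, :-2]).float()
    return x, y

class TwoBackLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_STIM, HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, 1)
    def forward(self, x):
        h, _ = self.lstm(x)                   # h: (batch, time, HIDDEN)
        return self.readout(h).squeeze(-1), h

model = TwoBackLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):                       # short, illustrative training run
    x, y = make_batch()
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# PCA of hidden-state trajectories (via SVD): the kind of low-dimensional view used
# to inspect how item representations transform from step to step.
with torch.no_grad():
    x, _ = make_batch(256)
    _, h = model(x)
states = h.reshape(-1, HIDDEN).numpy()
states -= states.mean(axis=0)
_, _, vt = np.linalg.svd(states, full_matrices=False)
pcs = states @ vt[:2].T                       # projection onto the first two PCs
print("final loss:", float(loss), "| PC projection shape:", pcs.shape)
```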


2006 ◽  
Vol 29 (1) ◽  
pp. 81-81
Author(s):  
Ralph-Axel Müller

Although van der Velde and de Kamps's (vdV&dK) attempt to put syntactic processing into a broader context of combinatorial cognition is promising, their coverage of neuroscientific evidence is disappointing. Neither their case against binding by temporal coherence nor their arguments against recurrent neural networks are compelling. As an alternative, vdV&dK propose a blackboard model that is based on the assumption of special processors (e.g., lexical versus grammatical), but evidence from the cognitive neuroscience of language, which is, overall, less than supportive of such special processors, is not considered. As a consequence, vdV&dK's proposal may be a clever model of syntactic processing, but it remains unclear how much we can learn from it with regard to biologically based human language.


2012 ◽  
Vol 24 (2) ◽  
pp. 523-540 ◽  
Author(s):  
Dimitrije Marković ◽  
Claudius Gros

A massively recurrent neural network responds to input stimuli on the one hand and, on the other, is autonomously active in the absence of sensory inputs. The response to stimuli and the information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of these intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
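
As an illustration of nonsynaptic (intrinsic) adaptation, the sketch below lets each unit of a random recurrent rate network adjust its gain and threshold with an entropy-based intrinsic-plasticity rule in the spirit of Triesch (2005), which pushes each neuron's output distribution toward an exponential with a fixed mean. This rule is a stand-in for, not a reproduction of, the adaptation used in the paper, and the network size, learning rate, and target mean are all assumptions.

```python
# Minimal sketch (not the authors' code): intrinsic gain/threshold adaptation
# in a random recurrent rate network, using a Triesch-style entropy-based rule.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS, ETA, MU = 100, 5000, 1e-3, 0.3    # MU: target mean firing rate

W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(W, 0.0)
gain = np.ones(N)            # intrinsic gain a_i
thresh = np.zeros(N)         # intrinsic threshold b_i
y = rng.random(N)            # firing rates in (0, 1)

for t in range(STEPS):
    x = W @ y                                            # recurrent drive (no external input)
    y = 1.0 / (1.0 + np.exp(-(gain * x + thresh)))

    # Stochastic gradient rule driving each neuron's output distribution toward
    # an exponential with mean MU (maximum entropy at fixed mean).
    d_thresh = ETA * (1.0 - (2.0 + 1.0 / MU) * y + (y ** 2) / MU)
    d_gain = ETA / gain + x * d_thresh
    thresh += d_thresh
    gain = np.maximum(gain + d_gain, 1e-3)               # keep gains positive for safety

print("mean rate:", round(float(y.mean()), 3), "| mean gain:", round(float(gain.mean()), 3))
```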


2020 ◽  
Vol 16 (11) ◽  
pp. e1008342
Author(s):  
Zhewei Zhang ◽  
Huzi Cheng ◽  
Tianming Yang

The brain makes flexible and adaptive responses in a complicated and ever-changing environment for an organism’s survival. To achieve this, the brain needs to understand the contingencies between its sensory inputs, actions, and rewards. This is analogous to the statistical inference that has been extensively studied in the field of natural language processing (NLP), where recent developments in recurrent neural networks have found many successes. We ask whether these neural networks, gated recurrent unit (GRU) networks in particular, reflect how the brain solves the contingency problem. We therefore build a GRU network framework inspired by the statistical learning approach of NLP and test it with four exemplar behavioral tasks previously used in empirical studies. The network models are trained to predict future events based on past events, both comprising sensory, action, and reward events. We show that the networks can successfully reproduce animal and human behavior. The networks generalize beyond their training, perform Bayesian inference in novel conditions, and adapt their choices when event contingencies vary. Importantly, units in the networks encode task variables and exhibit activity patterns that match previous neurophysiological findings. Our results suggest that the neural network approach based on statistical sequence learning may reflect the brain’s computational principle underlying flexible and adaptive behaviors and serve as a useful approach for understanding the brain.
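
The core training scheme, predicting the next event in a stream of sensory, action, and reward tokens, can be sketched compactly. The following is a hypothetical, minimal version (not the authors' framework): a GRU is trained language-model style on a toy contingency in which stimulus A is rewarded after a left action and stimulus B after a right action; the vocabulary, task, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a GRU that predicts the next event token.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB = ["A", "B", "LEFT", "RIGHT", "REWARD", "NO_REWARD"]
TOK = {w: i for i, w in enumerate(VOCAB)}

def make_batch(batch_size=64):
    """Each trial: stimulus -> random action -> contingent reward outcome."""
    seqs = []
    for _ in range(batch_size):
        stim = "A" if torch.rand(1).item() < 0.5 else "B"
        act = "LEFT" if torch.rand(1).item() < 0.5 else "RIGHT"
        correct = (stim == "A" and act == "LEFT") or (stim == "B" and act == "RIGHT")
        out = "REWARD" if correct else "NO_REWARD"
        seqs.append([TOK[stim], TOK[act], TOK[out]])
    return torch.tensor(seqs)                    # shape: (batch, 3)

class EventGRU(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), 16)
        self.gru = nn.GRU(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(VOCAB))
    def forward(self, tokens):
        h, _ = self.gru(self.embed(tokens))
        return self.out(h)                       # logits for the *next* event at each step

model = EventGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    seq = make_batch()
    logits = model(seq[:, :-1])                  # predict events 2..T from events 1..T-1
    loss = loss_fn(logits.reshape(-1, len(VOCAB)), seq[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final next-event prediction loss:", float(loss))
```

Because the reward token is predictable only by combining the stimulus and the action, the trained hidden state must carry the contingency between them, which is the sense in which such networks "understand" the task structure.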


2021 ◽  
Author(s):  
Xiangbin Teng ◽  
Ru-Yuan Zhang

Complex human behaviors involve perceiving continuous stimuli and planning actions at sequential time points, as in perceiving and producing speech and music. To guide adaptive behavior, the brain needs to internally anticipate a sequence of prospective moments. How does the brain achieve this sequential temporal anticipation without relying on any external timing cues? To answer this question, we designed a premembering task: we tagged three temporal locations in white noise by asking human listeners to detect a tone presented at one of those locations. Using novel modulation analyses, we then selectively probed the memory-guided anticipation processes in trials containing only flat noise. A multiscale anticipation scheme was revealed: neural power modulation in the delta band encodes noise duration on a supra-second scale, while modulations in the alpha-beta band mark the tagged temporal locations on a subsecond scale and correlate with tone-detection performance. To unveil the functional role of these neural observations, we turned to recurrent neural networks (RNNs) optimized for the behavioral task. The RNN hidden dynamics resembled the neural modulations; further analyses and perturbations of the RNNs suggest that the power modulations in the alpha/beta band emerge from selectively suppressing irrelevant noise periods and increasing sensitivity to the anticipated temporal locations. Our neural, behavioral, and modeling findings converge to demonstrate that sequential temporal anticipation involves a process of dynamic gain control: to anticipate a few meaningful moments is also to actively ignore the irrelevant events that happen most of the time.


2022 ◽  
Author(s):  
Leo Kozachkov ◽  
John Tauber ◽  
Mikael Lundqvist ◽  
Scott L Brincat ◽  
Jean-Jacques Slotine ◽  
...  

Working memory has long been thought to arise from sustained spiking/attractor dynamics. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states over gaps in time with little or no spiking. To determine whether STSP confers additional functional advantages, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs with and without STSP were both able to maintain memories over distractors presented in the middle of the memory delay. However, RNNs with STSP showed activity similar to that seen in the cortex of monkeys performing the same task, whereas RNNs without STSP showed activity that was less brain-like. Further, RNNs with STSP were more robust to noise and network degradation than RNNs without STSP. These results show that STSP not only helps maintain working memories but also makes neural networks more robust.
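
For concreteness, the following sketch (not the authors' training setup) shows one standard way to attach STSP to a rate RNN: each presynaptic neuron carries a facilitation variable u and a resource variable x that evolve with activity, and the effective recurrent weight is the static weight scaled by u·x, in the widely used Mongillo-style formulation. The parameters, the absence of training, and the simple pulse-then-delay protocol are assumptions; the point is only to show where the STSP state enters the network update.

```python
# Minimal sketch (not the authors' code): a rate RNN with Mongillo-style STSP.
import numpy as np

rng = np.random.default_rng(0)
N, DT = 100, 0.01                  # neurons, time step (s)
TAU, TAU_X, TAU_U, U0 = 0.1, 0.2, 1.5, 0.3

W = rng.normal(0.0, 1.2 / np.sqrt(N), size=(N, N))
r = np.zeros(N)                    # firing rates
x = np.ones(N)                     # available synaptic resources (per presynaptic neuron)
u = np.full(N, U0)                 # release probability / facilitation (per presynaptic neuron)

def step(r, x, u, ext):
    # Effective recurrent weights are scaled by the presynaptic STSP state u*x.
    W_eff = W * (u * x)[np.newaxis, :]
    drive = W_eff @ r + ext
    r = r + DT * (-r + np.maximum(drive, 0.0)) / TAU
    # STSP dynamics: resources recover and are consumed; facilitation grows with activity.
    x = x + DT * ((1.0 - x) / TAU_X - u * x * r)
    u = u + DT * ((U0 - u) / TAU_U + U0 * (1.0 - u) * r)
    return r, x, np.clip(u, 0.0, 1.0)

# Brief input pulse to a subgroup, then an unstimulated delay: information about the
# pulse can linger in (u, x) even as firing rates relax (an "activity-silent" trace).
pulse = np.zeros(N); pulse[:20] = 2.0
for t in range(int(2.0 / DT)):
    ext = pulse if t < int(0.2 / DT) else 0.0
    r, x, u = step(r, x, u, ext)

print("mean rate after delay:", round(float(r.mean()), 4),
      "| mean u*x in stimulated group:", round(float((u[:20] * x[:20]).mean()), 4))
```

Whether the synaptic trace outlives the rate transient depends on the chosen time constants; trained task-performing versions of such networks are what the study compares against RNNs without STSP.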


2018 ◽  
Author(s):  
Patrick Krauss ◽  
Marc Schuster ◽  
Verena Dietrich ◽  
Achim Schilling ◽  
Holger Schulze ◽  
...  

Recurrent neural networks are complex nonlinear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths w_ij between the individual neurons. However, little is known about the extent to which network dynamics is tunable on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamical impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with w_ij = w_ji). By computing a ‘phase diagram’ of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the ‘edge of chaos’ by assuring a proper balance between excitatory and inhibitory neural connections.

Author summary: Computations in the brain need to be both reproducible and sensitive to changing input from the environment. It has been shown that recurrent neural networks can meet these simultaneous requirements only in a particular dynamical regime, called the edge of chaos in nonlinear systems theory. Here, we demonstrate that recurrent neural networks can easily be tuned to this critical regime of optimal information processing by assuring a proper ratio of excitatory to inhibitory connections between the neurons. This result is in line with micro-anatomical studies of the cortex, which frequently confirm that the excitatory-inhibitory balance is strictly conserved there. Furthermore, it turns out that neural dynamics is largely independent of the total density of connections, a feature that explains how the brain remains functional during periods of growth or decay. Finally, we find that too many symmetric connections are detrimental to the critical dynamical regime described above, but may in turn be useful for pattern completion tasks.
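
A rough version of this experiment is easy to reproduce. The sketch below (not the authors' code) draws random weight matrices with a specified density, excitatory/inhibitory balance, and symmetry, runs a discrete-time tanh rate network, and uses the growth of a small perturbation between two nearby trajectories as a crude proxy for how close the network sits to the chaotic regime. The parameterization of balance and symmetry, the network size, and the divergence diagnostic are illustrative assumptions rather than the paper's exact definitions.

```python
# Minimal sketch (not the authors' code): weight-matrix statistics and a chaos proxy.
import numpy as np

rng = np.random.default_rng(0)
N = 200

def random_weights(density=0.2, balance=0.0, symmetry=0.0, scale=1.5):
    """balance in [-1, 1]: -1 all inhibitory, +1 all excitatory;
    symmetry: fraction of upper-triangle entries mirrored so that w_ij = w_ji."""
    mask = rng.random((N, N)) < density
    signs = np.where(rng.random((N, N)) < (1.0 + balance) / 2.0, 1.0, -1.0)
    W = mask * signs * np.abs(rng.normal(0.0, scale / np.sqrt(density * N), (N, N)))
    i, j = np.nonzero(np.triu(rng.random((N, N)) < symmetry, k=1))
    W[j, i] = W[i, j]                  # enforce w_ij = w_ji for the chosen pairs
    np.fill_diagonal(W, 0.0)
    return W

def divergence(W, steps=300, eps=1e-6):
    """Mean per-step growth of a tiny perturbation: >1 suggests chaos, <1 ordered dynamics."""
    a = rng.normal(0.0, 0.5, N)
    b = a + eps * rng.normal(0.0, 1.0, N)
    for _ in range(steps):
        a, b = np.tanh(W @ a), np.tanh(W @ b)
    return (np.linalg.norm(a - b) / eps) ** (1.0 / steps)

for bal in (-0.6, -0.2, 0.0, 0.2, 0.6):
    W = random_weights(balance=bal)
    print(f"balance={bal:+.1f}  per-step divergence factor={divergence(W):.3f}")
```

Sweeping the balance parameter while holding density and symmetry fixed gives a one-dimensional slice of the kind of phase diagram the abstract describes.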


Dialogue ◽  
1998 ◽  
Vol 37 (1) ◽  
pp. 155-162
Author(s):  
Don Ross

Paul Churchland does not open his latest book, The Engine of Reason, the Seat of the Soul, modestly. He begins by announcing, “This book is about you. And me … More broadly still, it is about every creature that ever swam, or walked, or flew over the face of the Earth” (p. 3). A few sentences later, he says, “Fortunately, recent research into neural networks … has produced the beginnings of a real understanding of how the biological brain works—a real understanding, that is, of how you work, and everyone else like you” (p. 3). The implicit identification here of “me and you and everyone” with “the biological brain” might lead an uncharitable reader to view Churchland's book as “Eliminativism for the non-specialist,” that is, as an attempt to popularize the view of the mind-body problem with which, among his professional peers, Churchland has long been identified. However, I think that such a reading would be uncharitable. He is, of course, frequently sceptical about the utility of folk psychology, but in this book he is much less concerned to disparage folk psychology as a failed theory (by contrast with, for example, the arguments in Churchland 1979) than to urge the more modest view that the more we understand the brain, the better we shall be at helping those whose brains are damaged in ways that interfere seriously with the fulfilment of their lives. Hence, I am inclined to take him at his word when he says in the Preface that “The book is motivated first of all by sheer excitement over the new picture that is now emerging … [and] … also by the idea that this is information that the public needs to know” (p. xi). What excites Churchland so, at least overtly, is not the negative thesis he has defended elsewhere that folk-psychological terms fail to refer; his enthusiasm is mainly reserved for the positive thesis that minds are, essentially, interacting assemblies of recurrent neural networks. It is therefore this positive thesis, and Churchland's defence of it, that I will assess in the following discussion.

