Bio-instantiated recurrent neural networks

Author(s):  
Alexandros Goulas ◽  
Fabrizio Damicelli ◽  
Claus C Hilgetag

Biological neuronal networks (BNNs) constitute a rich source of inspiration and analogy for researchers who focus on artificial neuronal networks (ANNs). Moreover, neuroscientists increasingly use ANNs as models of the brain. However, beyond certain similarities and analogies that can be drawn between ANNs and BNNs, such networks exhibit marked differences, specifically with respect to their network topology. Here, we investigate to what extent the network topology found in nature can benefit recurrent neural networks (RNNs) in two respects: i) prediction performance, that is, the capacity of the network to minimize the objective function at hand on test data, and ii) speed of training, that is, how fast during training the network reaches its optimal performance. To this end, we examine different ways to construct RNNs that instantiate the network topology of the brains of different species; we refer to such RNNs as bio-instantiated. We examine the bio-instantiated RNNs in the context of a key cognitive capacity, working memory, defined as the ability to track task-relevant information as a sequence of events unfolds in time. We highlight which strategies can be used to construct RNNs with the network topology found in nature without sacrificing prediction capacity or speed of training. Although we observe no performance enhancement compared to randomly wired RNNs, our approach demonstrates how empirical neural network data can be used to construct RNNs, thus facilitating further experimentation with biologically realistic network topologies.
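As a rough illustration of how an empirical connectome could be turned into a trainable recurrent network, the sketch below masks the recurrent weights of a vanilla RNN with a binary adjacency matrix. All names (BioInstantiatedRNN, adjacency) are hypothetical, the adjacency matrix is assumed to be a NumPy array, and this is only one possible construction strategy, not the authors' published code.

```python
import torch
import torch.nn as nn

class BioInstantiatedRNN(nn.Module):
    """Vanilla RNN whose recurrent connectivity is constrained to an
    empirical connectome topology (a sketch under stated assumptions)."""

    def __init__(self, adjacency, n_in, n_out):
        super().__init__()
        n_hidden = adjacency.shape[0]
        # Binary mask: 1 where the connectome has a connection, 0 elsewhere.
        self.register_buffer("mask", torch.tensor(adjacency > 0, dtype=torch.float32))
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Parameter(torch.randn(n_hidden, n_hidden) / n_hidden ** 0.5)
        self.w_out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        # x: (time, batch, n_in)
        h = torch.zeros(x.shape[1], self.mask.shape[0], device=x.device)
        for t in range(x.shape[0]):
            # Absent connections stay at exactly zero throughout training.
            h = torch.tanh(self.w_in(x[t]) + h @ (self.w_rec * self.mask).T)
        return self.w_out(h)
```

Because the mask multiplies the weights at every forward pass, gradient updates can reshape connection strengths but never create edges that are missing from the empirical topology.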

2006 ◽  
Vol 29 (1) ◽  
pp. 81-81
Author(s):  
Ralph-Axel Müller

Although van der Velde and de Kamps's (vdV&dK) attempt to put syntactic processing into a broader context of combinatorial cognition is promising, their coverage of neuroscientific evidence is disappointing. Neither their case against binding by temporal coherence nor their arguments against recurrent neural networks are compelling. As an alternative, vdV&dK propose a blackboard model that is based on the assumption of special processors (e.g., lexical versus grammatical), but evidence from the cognitive neuroscience of language, which is, overall, less than supportive of such special processors, is not considered. As a consequence, vdV&dK's blackboard may be a clever model of syntactic processing, but it remains unclear how much we can learn from it with regard to biologically based human language.


2019 ◽  
Vol 31 (7) ◽  
pp. 1235-1270 ◽  
Author(s):  
Yong Yu ◽  
Xiaosheng Si ◽  
Changhua Hu ◽  
Jianxun Zhang

Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) network can handle the problem of long-term dependencies well. Since its introduction, almost all the exciting results based on RNNs have been achieved by the LSTM, and it has become the focus of deep learning. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, LSTM networks are divided into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, their various applications are discussed. Finally, future research directions are presented for LSTM networks.
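For readers unfamiliar with the cell structure under review, the following is a minimal, textbook-style LSTM step in NumPy. It is not taken from the reviewed works; it only illustrates how the gated cell update lets information survive across long input gaps.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.
    W: (4*n_hidden, n_in), U: (4*n_hidden, n_hidden), b: (4*n_hidden,)
    hold stacked parameters for the input (i), forget (f), output (o)
    gates and the candidate cell (g)."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g      # gated cell update: the forget gate controls
                                # how much old information is retained
    h = o * np.tanh(c)          # hidden state exposed to the rest of the network
    return h, c
```

When the forget gate stays near 1 and the input gate near 0, the cell state is copied almost unchanged across time steps, which is precisely what allows the LSTM to bridge large input gaps that defeat plain sigma/tanh cells.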


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 2416 ◽  
Author(s):  
András Szilágyi ◽  
István Zachar ◽  
Anna Fedor ◽  
Harold P. de Vladar ◽  
Eörs Szathmáry

Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is run with winners-share-all dynamics to select candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during the transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
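A minimal sketch of the attractor component described above, assuming a Hebbian Hopfield-style network rather than the paper's palimpsest variant; the noise terms here stand in for the sources of variation listed in (i)-(iii).

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns, n):
    """Hebbian storage of +/-1 patterns in a Hopfield-style attractor
    network (simplified stand-in for the paper's palimpsest memory)."""
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def noisy_recall(W, cue, steps=20, noise=0.05):
    """Asynchronous recall from a noisy cue; occasional random flips play
    the role of mutation during 'replication' of candidate solutions."""
    s = cue.copy()
    n = len(s)
    for _ in range(steps * n):
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1
        if rng.random() < noise:     # transmission/recall noise -> novel variants
            s[i] = -s[i]
    return s
```

Repeatedly recalling, copying, and re-storing the best-scoring patterns across a population of such networks yields the selection-plus-variation loop that the architecture requires.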


2021 ◽  
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings specifying how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations and not with traditional sensory models of working memory. In task-optimized recurrent neural networks, we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.
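The distinction can be made concrete with a toy example (hypothetical task structure, not the authors' paradigm): a sensory account stores the cue itself and defers the decision, whereas a contingency representation stores the mapping from possible upcoming probes to actions.

```python
# Toy illustration of sensory memory vs. contingency representation.
cue = "red"

# Sensory model of working memory: keep the stimulus, decide later.
sensory_memory = {"stored_cue": cue}

# Contingency representation: pre-compute the response rule implied by the cue,
# i.e. how future behavior depends on each possible upcoming event.
contingency = {
    "probe_red": "press_left" if cue == "red" else "press_right",
    "probe_green": "press_right" if cue == "red" else "press_left",
}

def respond(probe):
    # Behavior is read out directly from the stored contingencies,
    # without re-consulting the original stimulus.
    return contingency[f"probe_{probe}"]

print(respond("green"))   # -> "press_right"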


2021 ◽  
Author(s):  
Xiangbin Teng ◽  
Ru-Yuan Zhang

Complex human behaviors involve perceiving continuous stimuli and planning actions at sequential time points, such as in perceiving and producing speech and music. To guide adaptive behavior, the brain needs to internally anticipate a sequence of prospective moments. How does the brain achieve this sequential temporal anticipation without relying on any external timing cues? To answer this question, we designed a premembering task: we tagged three temporal locations in white noise by asking human listeners to detect a tone presented at one of those locations. Using novel modulation analyses, we selectively probed the memory-guided anticipation processes in trials containing only flat noise. A multiscale anticipating scheme was revealed: the neural power modulation in the delta band encodes noise duration on a supra-second scale, while the modulations in the alpha-beta band mark the tagged temporal locations on a subsecond scale and correlate with tone detection performance. To unveil the functional role of these neural observations, we turned to recurrent neural networks (RNNs) optimized for the behavioral task. The RNN hidden dynamics resembled the neural modulations; further analyses and perturbations of the RNNs suggest that the neural power modulations in the alpha/beta band emerged from selectively suppressing irrelevant noise periods and increasing sensitivity to the anticipated temporal locations. Our neural, behavioral, and modelling findings converge to demonstrate that sequential temporal anticipation involves a process of dynamic gain control: to anticipate a few meaningful moments is also to actively ignore the irrelevant events that happen most of the time.
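The proposed gain-control scheme can be caricatured as a time-varying gain applied to the sensory input: suppressed during uninformative noise and transiently boosted around the memorized temporal locations. The sketch below is purely illustrative, with all parameter values (sampling rate, locations, gain levels) assumed.

```python
import numpy as np

fs = 100                                   # samples per second (assumed)
t = np.arange(0, 3.0, 1 / fs)              # 3 s of white noise
anticipated = [0.5, 1.5, 2.5]              # tagged temporal locations (s, assumed)

gain = np.full_like(t, 0.2)                # baseline suppression of irrelevant periods
for loc in anticipated:
    # Transient sensitivity boost around each anticipated moment.
    gain += 0.8 * np.exp(-0.5 * ((t - loc) / 0.05) ** 2)

noise = np.random.randn(t.size)
effective_input = gain * noise             # what a downstream tone detector would "see"
```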


2018 ◽  
Author(s):  
Patrick Krauss ◽  
Marc Schuster ◽  
Verena Dietrich ◽  
Achim Schilling ◽  
Holger Schulze ◽  
...  

Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths w_ij between the individual neurons. However, little is known about the extent to which network dynamics is tunable on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamical impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with w_ij = w_ji). By computing a ‘phase diagram’ of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the ‘edge of chaos’ by assuring a proper balance between excitatory and inhibitory neural connections.

Author summary: Computations in the brain need to be both reproducible and sensitive to changing input from the environment. It has been shown that recurrent neural networks can meet these simultaneous requirements only in a particular dynamical regime, called the edge of chaos in non-linear systems theory. Here, we demonstrate that recurrent neural networks can be easily tuned to this critical regime of optimal information processing by assuring a proper ratio of excitatory and inhibitory connections between the neurons. This result is in line with several micro-anatomical studies of the cortex, which frequently confirm that the excitatory-inhibitory balance is strictly conserved in the cortex. Furthermore, it turns out that neural dynamics is largely independent of the total density of connections, a feature that explains how the brain remains functional during periods of growth or decay. Finally, we find that the existence of too many symmetric connections is detrimental to the above-mentioned critical dynamical regime, but may in turn be useful for pattern completion tasks.
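A minimal sketch of how a weight matrix with the three control parameters exposed might be generated. The paper's exact construction may differ; in particular, parameterizing balance as (fraction excitatory minus fraction inhibitory) in [-1, 1] is an assumption made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_weight_matrix(n, density, balance, symmetry):
    """Random recurrent weight matrix controlled by three coarse statistics.
    density  : fraction of non-zero connections
    balance  : fraction excitatory minus fraction inhibitory, in [-1, 1] (assumed)
    symmetry : fraction of connected pairs forced to satisfy w_ij = w_ji
    """
    W = np.zeros((n, n))
    mask = rng.random((n, n)) < density            # which connections exist
    np.fill_diagonal(mask, False)                  # no self-connections

    p_exc = (1.0 + balance) / 2.0                  # probability a connection is excitatory
    signs = np.where(rng.random((n, n)) < p_exc, 1.0, -1.0)
    W[mask] = (signs * np.abs(rng.normal(size=(n, n))))[mask]

    # Symmetrize a random subset of the existing connections.
    i, j = np.where(mask & (rng.random((n, n)) < symmetry))
    W[j, i] = W[i, j]
    return W

# Example: sparse, slightly inhibition-dominated, mostly asymmetric network.
W = make_weight_matrix(n=200, density=0.1, balance=-0.1, symmetry=0.2)
```

Sweeping balance from negative to positive values while holding density and symmetry fixed is the kind of scan that produces the ‘phase diagram’ described in the abstract.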


2021 ◽  
Author(s):  
Quan Wan ◽  
Jorge A. Menendez ◽  
Bradley R. Postle

How does the brain prioritize among the contents of working memory to appropriately guide behavior? Using inverted encoding modeling (IEM), previous work (Wan et al., 2020) showed that unprioritized memory items (UMI) are actively represented in the brain, but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To gain insight into the mechanisms underlying the UMI-to-PMI representational transformation, we trained recurrent neural networks (RNNs) with an LSTM architecture to perform a 2-back working memory task. Visualization of the LSTM hidden layer activity using Principal Component Analysis (PCA) revealed that the UMI representation is rotationally remapped to that of the PMI, and this was quantified and confirmed via demixed PCA. Applying the same analyses to the EEG dataset of Wan et al. (2020) revealed similar rotational remapping between the UMI and PMI representations. These results identify rotational remapping as a candidate neural computation employed in the dynamic prioritization of the contents of working memory.
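The visualization step can be sketched as follows, assuming the hidden-state activity of the trained LSTM has already been saved to disk; the array names and file path are hypothetical, and demixed PCA would replace the plain PCA used here for the quantitative confirmation.

```python
import numpy as np
from sklearn.decomposition import PCA

# hidden: (n_trials, n_timesteps, n_units) activity from a trained 2-back RNN
hidden = np.load("lstm_hidden_states.npy")          # assumed to be precomputed
n_trials, n_time, n_units = hidden.shape

# Fit a shared low-dimensional subspace across all time points and trials.
pca = PCA(n_components=2)
flat = hidden.reshape(-1, n_units)
scores = pca.fit_transform(flat).reshape(n_trials, n_time, 2)

# Mean trajectory in PC space; a rotation of the item axis between the
# unprioritized and prioritized epochs shows up as a change of angle here.
traj = scores.mean(axis=0)
angles = np.degrees(np.arctan2(traj[:, 1], traj[:, 0]))
print(angles)
```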


Dialogue ◽  
1998 ◽  
Vol 37 (1) ◽  
pp. 155-162
Author(s):  
Don Ross

Paul Churchland does not open his latest book, The Engine of Reason, the Seat of the Soul, modestly. He begins by announcing, “This book is about you. And me … More broadly still, it is about every creature that ever swam, or walked, or flew over the face of the Earth” (p. 3). A few sentences later, he says, “Fortunately, recent research into neural networks … has produced the beginnings of a real understanding of how the biological brain works—a real understanding, that is, of how you work, and everyone else like you” (p. 3). The implicit identification here of “me and you and everyone” with “the biological brain” might lead an uncharitable reader to view Churchland's book as “Eliminativism for the non-specialist,” that is, as an attempt to popularize the view of the mind-body problem with which, among his professional peers, Churchland has long been identified. However, I think that such a reading would be uncharitable. He is, of course, frequently sceptical about the utility of folk psychology, but in this book he is much less concerned to disparage folk psychology as a failed theory (by contrast with, for example, the arguments in Churchland 1979) than to urge the more modest view that the more we understand the brain, the better we shall be at helping those whose brains are damaged in ways that interfere seriously with the fulfilment of their lives. Hence, I am inclined to take him at his word when he says in the Preface that “The book is motivated first of all by sheer excitement over the new picture that is now emerging … [and] … also by the idea that this is information that the public needs to know” (p. xi). What excites Churchland so, at least overtly, is not the negative thesis he has defended elsewhere that folk-psychological terms fail to refer; his enthusiasm is mainly reserved for the positive thesis that minds are, essentially, interacting assemblies of recurrent neural networks. It is therefore this positive thesis, and Churchland's defence of it, that I will assess in the following discussion.


2019 ◽  
Vol 6 (10) ◽  
pp. 191086 ◽  
Author(s):  
Vibeke Devold Valderhaug ◽  
Wilhelm Robert Glomm ◽  
Eugenia Mariana Sandru ◽  
Masahiro Yasuda ◽  
Axel Sandvig ◽  
...  

In vitro electrophysiological investigation of neural activity at a network level holds tremendous potential for elucidating underlying features of brain function (and dysfunction). In standard neural network modelling systems, however, the fundamental three-dimensional (3D) character of the brain is a largely disregarded feature. This widely applied neuroscientific strategy affects several aspects of the structure–function relationships of the resulting networks, altering network connectivity and topology, ultimately reducing the translatability of the results obtained. As these model systems increase in popularity, it becomes imperative that they capture, as accurately as possible, fundamental features of neural networks in the brain, such as small-worldness. In this report, we combine in vitro neural cell culture with a biologically compatible scaffolding substrate, surface-grafted polymer particles (PPs), to develop neural networks with 3D topology. Furthermore, we investigate their electrophysiological network activity through the use of 3D multielectrode arrays. The resulting neural network activity shows emergent behaviour consistent with maturing neural networks capable of performing computations, i.e. activity patterns suggestive of both information segregation (desynchronized single spikes and local bursts) and information integration (network spikes). Importantly, we demonstrate that the resulting PP-structured neural networks show both structural and functional features consistent with small-world network topology.
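Small-worldness is conventionally quantified by comparing clustering and characteristic path length against randomized reference graphs. Below is a sketch using networkx's sigma coefficient, with a synthetic Watts-Strogatz graph standing in for the functional network that would be inferred from the multielectrode recordings (that inference step is assumed to have been done already).

```python
import networkx as nx

# Placeholder graph: in practice this would be the connectivity graph
# inferred from the 3D multielectrode array recordings.
G = nx.connected_watts_strogatz_graph(100, k=6, p=0.1, seed=42)

# sigma = (C/C_rand) / (L/L_rand); values > 1 suggest small-world topology.
sigma = nx.sigma(G, niter=5, nrand=5)
print(f"small-world coefficient sigma = {sigma:.2f}")
```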

