Cognition, Concurrency Theory and Reverberations in the Brain: in Search of a Calculus of Communicating (Recurrent) Neural Systems

10.29007/94w5 ◽  
2018 ◽  
Author(s):  
Howard Bowman ◽  
Li Su

We consider whether techniques from concurrency theory can be applied in the area of Cognitive Neuroscience. We focus on two potential applications. The first of these explores structural decomposition, which is effectively assumed by the localisation of function metaphor that so dominates current Cognitive Neuroscience. We take concurrency theory methods, especially Process Calculi, as canonical illustrations of system description notations that support structural decomposition and, in particular, encapsulation of behaviour. We argue that carrying these behavioural and notational properties over to the Cognitive Neuroscience setting is difficult, since neural networks (the modelling method of choice) are not naturally encapsulable. Our second application presents work on verifying stability properties of neural network learning algorithms using model checking. We thereby present evidence that a particular learning algorithm, the Generalised Recirculation algorithm, exhibits an especially severe form of instability, whereby it forgets what it has learnt, while continuing to be trained on the same pattern set.
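The instability result concerns the Generalised Recirculation (GeneRec) learning rule. The sketch below is only a rough simulation of that rule on a fixed pattern set, with a crude probe for whether the error, once low, later rises again; it is not the authors' model-checking analysis, and the network size, learning rate, and one-step settling approximation are all assumptions.

```python
# Hypothetical GeneRec-style simulation on a fixed pattern set, with a crude
# "does the error stay low once it gets low?" probe. The paper verifies such
# properties by model checking; this only illustrates the rule being analysed.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 4, 3, 2
W_xh = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden
W_ho = rng.normal(0, 0.5, (n_out, n_hid))  # hidden <-> output (symmetric)

# A fixed pattern set that is presented over and over again.
X = rng.integers(0, 2, (5, n_in)).astype(float)
T = rng.integers(0, 2, (5, n_out)).astype(float)

lr = 0.5
errors = []
for epoch in range(500):
    sq_err = 0.0
    for x, t in zip(X, T):
        # Minus phase: input clamped, output free (one-step settling).
        h_minus = sigmoid(W_xh @ x)
        o_minus = sigmoid(W_ho @ h_minus)
        # Plus phase: input and output clamped; hidden sees top-down target.
        h_plus = sigmoid(W_xh @ x + W_ho.T @ t)
        # GeneRec updates: presynaptic minus-phase activity times the
        # plus/minus difference of the postsynaptic unit.
        W_ho += lr * np.outer(t - o_minus, h_minus)
        W_xh += lr * np.outer(h_plus - h_minus, x)
        sq_err += np.sum((t - o_minus) ** 2)
    errors.append(sq_err / len(X))

# Crude instability probe: has the error risen again after having been low?
low = min(errors)
relapse = max(errors[errors.index(low):])
print(f"min error {low:.4f}, worst error after that {relapse:.4f}")
```

A simulation like this can only sample individual training trajectories; the appeal of model checking in the paper is that it explores the reachable states of the learning dynamics exhaustively.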

Author(s):  
Xiayu Chen ◽  
Ming Zhou ◽  
Zhengxin Gong ◽  
Wei Xu ◽  
Xingyu Liu ◽  
...  

Deep neural networks (DNNs) have attained human-level performance on dozens of challenging tasks through an end-to-end deep learning strategy. Deep learning gives rise to data representations with multiple levels of abstraction; however, it does not explicitly provide any insights into the internal operations of DNNs. Its success appeals to neuroscientists not only to apply DNNs to model biological neural systems, but also to adopt concepts and methods from cognitive neuroscience to understand the internal representations of DNNs. Although general deep learning frameworks such as PyTorch and TensorFlow could be used for such cross-disciplinary studies, they typically require high-level programming expertise and comprehensive mathematical knowledge. A toolbox specifically designed for cognitive neuroscientists to map DNNs and brains is urgently needed. Here, we present DNNBrain, a Python-based toolbox designed for exploring internal representations in both DNNs and the brain. By integrating DNN software packages and well-established brain imaging tools, DNNBrain provides application programming and command line interfaces for a variety of research scenarios, such as extracting DNN activation, probing DNN representations, mapping DNN representations onto the brain, and visualizing DNN representations. We expect that our toolbox will accelerate scientific research in applying DNNs to model biological neural systems and utilizing paradigms of cognitive neuroscience to unveil the black box of DNNs.
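As an illustration of the first research scenario (extracting DNN activation), the sketch below uses plain PyTorch forward hooks rather than DNNBrain's own interface; the model, layer choice, and stimuli are placeholder assumptions.

```python
# Minimal sketch of the kind of activation extraction DNNBrain wraps,
# written directly against PyTorch forward hooks (not DNNBrain's API).
import torch
import torchvision.models as models

model = models.alexnet(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on one convolutional layer of interest.
model.features[3].register_forward_hook(save_activation("conv2"))

# A batch of (here random) images stands in for the experimental stimuli.
stimuli = torch.rand(8, 3, 224, 224)
with torch.no_grad():
    model(stimuli)

# The stored tensor can then be flattened and, for example, regressed
# against voxel responses to the same stimuli.
print(activations["conv2"].shape)  # (8, 192, 27, 27) for AlexNet conv2
```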


2008 ◽  
Vol 105 (46) ◽  
pp. 18053-18057 ◽  
Author(s):  
Katherine M. Nautiyal ◽  
Ana C. Ribeiro ◽  
Donald W. Pfaff ◽  
Rae Silver

Mast cells are resident in the brain and contain numerous mediators, including neurotransmitters, cytokines, and chemokines, that are released in response to a variety of natural and pharmacological triggers. The number of mast cells in the brain fluctuates with stress and various behavioral and endocrine states. These properties suggest that mast cells are poised to influence neural systems underlying behavior. Using genetic and pharmacological loss-of-function models, we performed a behavioral screen for arousal responses including emotionality, locomotor, and sensory components. First, we found that mast cell-deficient Kit^(W-sh/W-sh) (sash^(-/-)) mice had a greater anxiety-like phenotype than WT and heterozygote littermate control animals in the open field arena and elevated plus maze. Second, we showed that blockade of brain, but not peripheral, mast cell activation increased anxiety-like behavior. Taken together, the data implicate brain mast cells in the modulation of anxiety-like behavior and provide evidence for the behavioral importance of neuroimmune links.


1993 ◽  
Vol 79 (5) ◽  
pp. 729-735 ◽  
Author(s):  
David Barba ◽  
Joseph Hardin ◽  
Jasodhara Ray ◽  
Fred H. Gage

Gene therapy has many potential applications in central nervous system (CNS) disorders, including the selective killing of tumor cells in the brain. A rat brain tumor model was used to test the herpes simplex virus (HSV)-thymidine kinase (TK) gene for its ability to selectively kill C6 and 9L tumor cells in the brain following systemic administration of the nucleoside analog ganciclovir. The HSV-TK gene was introduced in vitro into tumor cells (C6-TK and 9L-TK), then these modified tumor cells were evaluated for their sensitivity to cell killing by ganciclovir. In a dose-response assay, both C6-TK and 9L-TK cells were 100 times more sensitive to killing by ganciclovir (median lethal dose: C6-TK, 0.1 µg ganciclovir/ml; C6, 5.0 µg ganciclovir/ml) than unmodified wild-type tumor cells or cultured fibroblasts. In vivo studies confirmed the ability of intraperitoneal ganciclovir administration to kill established brain tumors in rats as quantified by both stereological assessment of brain tumor volumes and studies of animal survival over 90 days. Rats with brain tumors established by intracerebral injection of wild-type or HSV-TK modified tumor cells or by a combination of wild-type and HSV-TK-modified cells were studied with and without ganciclovir treatments. Stereological methods determined that ganciclovir treatment eliminated tumors composed of HSV-TK-modified cells while control tumors grew as expected (p < 0.001). In survival studies, all 10 rats with 9L-TK tumors treated with ganciclovir survived 90 days while all untreated rats died within 25 days. Curiously, tumors composed of combinations of 9L and 9L-TK cells could be eliminated by ganciclovir treatments even when only one-half of the tumor cells carried the HSV-TK gene. While not completely understood, this additional tumor cell killing appears to be both tumor selective and local in nature. It is concluded that HSV-TK gene therapy with ganciclovir treatment does selectively kill tumor cells in the brain and has many potential applications in CNS disorders, including the treatment of cancer.


2010 ◽  
Vol 22 (12) ◽  
pp. 2979-3035 ◽  
Author(s):  
Stefan Klampfl ◽  
Wolfgang Maass

Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. How they can learn to achieve this, especially without the help of a supervisor, remains an open question. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states lasting several hundred milliseconds. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed point attractors. They also provide a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
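A minimal linear SFA implementation helps make the slowness objective concrete: whiten the input, then keep the directions along which the whitened signal changes most slowly. The toy time series below is an assumption for illustration only; the paper applies the principle to readout neurons of simulated cortical microcircuits, not to a NumPy array.

```python
# Minimal linear slow feature analysis (SFA) sketch in NumPy.
import numpy as np

def linear_sfa(X, n_features=2):
    """X: array of shape (T, d), one row per time step."""
    X = X - X.mean(axis=0)
    # Whitening via the covariance eigendecomposition.
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    W_whiten = eigvec / np.sqrt(eigval)
    Z = X @ W_whiten
    # Slowness objective: minimise the variance of the temporal derivative.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)
    # Smallest eigenvalues correspond to the slowest directions.
    return Z @ dvec[:, :n_features]

# Toy trajectory: a slow sine hidden among faster, noisier channels.
t = np.linspace(0, 10, 2000)
X = np.column_stack([np.sin(0.5 * t) + 0.1 * np.random.randn(t.size),
                     np.sin(7.0 * t),
                     np.random.randn(t.size)])
slow = linear_sfa(X, n_features=1)  # recovers the slow sine up to sign/scale
```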


2000 ◽  
Author(s):  
Magdy Mohamed Abdelhameed ◽  
Sabri Cetinkunt

Cerebellar model articulation controller (CMAC) is a useful neural network learning technique. Although it was developed two decades ago, it still lacks an adequate learning algorithm, especially when used in a hybrid-type controller. This work introduces a simulation study examining the performance of a hybrid-type control system based on the conventional learning algorithm of the CMAC neural network. The study shows that this control system is unstable. A new adaptive learning algorithm for a CMAC-based hybrid-type controller is then proposed. The main features of the proposed learning algorithm, as well as the effects of its newly introduced parameters, are studied extensively via simulation case studies. The simulation results show that the proposed learning algorithm is robust in stabilizing the control system while preserving all the known advantages of the CMAC neural network. Part II of this work is dedicated to validating the effectiveness of the proposed CMAC learning algorithm experimentally.
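For readers unfamiliar with CMAC, the sketch below implements a one-dimensional CMAC with the conventional LMS-style update that the paper re-examines. The number of tilings, resolution, and learning rate are illustrative assumptions, and the paper's proposed adaptive algorithm is not reproduced here.

```python
# Minimal 1-D CMAC with the conventional error-spreading (LMS-style) update.
import numpy as np

class CMAC1D:
    def __init__(self, n_tilings=8, n_cells=32, x_min=0.0, x_max=1.0, lr=0.2):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.x_min, self.width = x_min, (x_max - x_min) / (n_cells - 1)
        self.w = np.zeros((n_tilings, n_cells + 1))

    def _active_cells(self, x):
        # Each tiling is offset by a fraction of one cell width.
        offsets = np.arange(self.n_tilings) / self.n_tilings * self.width
        idx = ((x - self.x_min + offsets) / self.width).astype(int)
        return np.clip(idx, 0, self.n_cells)

    def predict(self, x):
        # Output is the sum of the one active weight in every tiling.
        return self.w[np.arange(self.n_tilings), self._active_cells(x)].sum()

    def train(self, x, target):
        # Conventional CMAC update: spread the output error equally over
        # the active cells of all tilings.
        err = target - self.predict(x)
        self.w[np.arange(self.n_tilings), self._active_cells(x)] += \
            self.lr * err / self.n_tilings

cmac = CMAC1D()
for _ in range(200):
    for x in np.linspace(0, 1, 50):
        cmac.train(x, np.sin(2 * np.pi * x))
print(cmac.predict(0.25))  # should approach sin(pi/2) = 1.0
```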


1997 ◽  
Vol 84 (2) ◽  
pp. 627-661 ◽  
Author(s):  
Peter Brugger

This article updates Tune's 1964 review of variables influencing human subjects' attempts at generating random sequences of alternatives. It also covers aspects not included in the original review such as randomization behavior by patients with neurological and psychiatric disorders. Relevant work from animal research (spontaneous alternation paradigm) is considered as well. It is conjectured that Tune's explanation of sequential nonrandomness in terms of a limited capacity of short-term memory can no longer be maintained. Rather, interdependence among consecutive choices is considered a consequence of an organism's natural susceptibility to interference. Random generation is thus a complex action which demands complete suppression of any rule-governed behavior. It possibly relies on functions of the frontal lobes but cannot otherwise be “localized” to restricted regions of the brain. Possible developments in the field are briefly discussed, both with respect to basic experiments regarding the nature of behavioral nonrandomness and to potential applications of random-generation tasks.
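As an aside on how sequential nonrandomness is commonly quantified in such tasks, the sketch below computes two simple indices, the alternation rate and the first-order conditional entropy, for a subject-generated binary sequence. The choice of indices and the example sequence are illustrative assumptions, not material from the review.

```python
# Two simple indices of sequential dependence in a binary response sequence.
from collections import Counter
import math

def alternation_rate(seq):
    """Fraction of consecutive pairs that differ; 0.5 is expected for a
    truly random binary sequence."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a != b for a, b in pairs) / len(pairs)

def conditional_entropy(seq):
    """Entropy (bits) of the next symbol given the current one."""
    pairs = Counter(zip(seq, seq[1:]))
    total = sum(pairs.values())
    firsts = Counter(seq[:-1])
    h = 0.0
    for (a, b), n in pairs.items():
        p_pair = n / total        # joint probability of the pair
        p_cond = n / firsts[a]    # probability of b given a
        h -= p_pair * math.log2(p_cond)
    return h

seq = "HTHTHTTHTHHTHTHT"  # a hypothetical subject-generated sequence
print(alternation_rate(seq), conditional_entropy(seq))
```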

