Spatial constancy and the brain: insights from neural networks

Author(s):  
Robert L. White III ◽
Lawrence H. Snyder


1989 ◽
Vol 1 (3) ◽  
pp. 201-222 ◽  
Author(s):  
Adam N. Mamelak ◽  
J. Allan Hobson

Bizarreness is a cognitive feature common to REM sleep dreams, and one that can be easily measured. Because bizarreness is highly specific to dreaming, we propose that it is most likely brought about by changes in neuronal activity that are specific to REM sleep. At the level of the dream plot, bizarreness can be defined as either discontinuity or incongruity. In addition, the dreamer's thoughts about the plot may be logically deficient. We propose that dream bizarreness is the cognitive concomitant of two kinds of changes in neuronal dynamics during REM sleep. One is the disinhibition of forebrain networks caused by the withdrawal of the modulatory influences of norepinephrine (NE) and serotonin (5HT) in REM sleep, secondary to cessation of firing of locus coeruleus and dorsal raphe neurons. This aminergic demodulation can be mathematically modeled as a shift toward increased error at the outputs from neural networks, and these errors might be represented cognitively as incongruities and/or discontinuities. We also consider the possibility that discontinuities are the cognitive concomitant of sudden bifurcations or “jumps” in the responses of forebrain neuronal networks. These bifurcations are caused by phasic discharge of pontogeniculooccipital (PGO) neurons during REM sleep, providing a source of cholinergic modulation to the forebrain that could evoke unpredictable network responses. When phasic PGO activity stops, the resultant activity in the brain may be wholly unrelated to patterns of activity dominant before such phasic stimulation began. Mathematically, such a sudden shift from one pattern of activity to a second, unrelated one is called a bifurcation. We propose that the neuronal bifurcations brought about by PGO activity might be represented cognitively as bizarre discontinuities of dream plot. We regard these proposals as preliminary attempts to model the relationship between dream cognition and REM sleep neurophysiology. This neurophysiological model of dream bizarreness may also prove useful in understanding the contributions of REM sleep to the developmental and experiential plasticity of the cerebral cortex.
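
As a rough illustration of both proposals (a toy, not a model from the paper), the Python sketch below uses a small Hopfield-style attractor network: lowering a gain parameter stands in for aminergic demodulation and raises the error rate of pattern recall, while a strong transient input stands in for phasic PGO discharge and can switch the network into an unrelated stored pattern, i.e. a bifurcation. Network size, gains, and patterns are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hopfield-style attractor network storing two random +/-1 patterns.
N = 100
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def settle(state, gain, steps=500):
    """Stochastic (Glauber) updates; 'gain' stands in for aminergic
    modulation: high gain = reliable recall, low gain = noisy recall."""
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        p_up = 1.0 / (1.0 + np.exp(-2.0 * gain * (W[i] @ s)))
        s[i] = 1 if rng.random() < p_up else -1
    return s

def output_error(state, target):
    return np.mean(state != target)

cue = patterns[0].copy()
cue[:10] *= -1  # partially corrupted version of pattern 0

# Aminergic demodulation modeled as reduced gain -> more output errors.
print("error, high modulation:", output_error(settle(cue, gain=4.0), patterns[0]))
print("error, low  modulation:", output_error(settle(cue, gain=0.5), patterns[0]))

# A strong phasic ('PGO-like') input can push the network into the basin of the
# other stored pattern -- a bifurcation to an unrelated activity pattern.
pgo_state = np.sign(cue + 2 * patterns[1])
settled = settle(pgo_state, gain=4.0)
print("overlap with pattern 0:", np.mean(settled == patterns[0]))
print("overlap with pattern 1:", np.mean(settled == patterns[1]))
```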


2010 ◽  
Vol 61 (2) ◽  
pp. 120-124 ◽  
Author(s):  
Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANNs) generally classify patterns according to their relationships, responding to related patterns with similar outputs. Polynomial neural networks (PNNs) are capable of organizing themselves in response to certain features (relations) of the data. The polynomial neural network for identification of dependences of variables (D-PNN) describes a functional dependence of the input variables (not entire patterns). It approximates a hyper-surface of this function with multi-parametric particular polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network, designed by the author, is based on the GMDH polynomial neural network. The D-PNN operates in a way closer to brain learning than the ANN does. The ANN is in principle a simplified form of the PNN, in which the combinations of input variables are missing.
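
To make the principle concrete, here is a minimal GMDH-style sketch in Python (an illustration of the general approach, not the author's D-PNN implementation): each unit fits a two-variable quadratic particular polynomial by least squares, units are ranked on held-out data, and the best outputs feed a second layer. The data set and all parameters are toy assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Toy data: y is an unknown smooth function of three input variables.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

def particular_poly(u, v):
    """Two-variable quadratic 'particular polynomial' (GMDH partial description)."""
    return np.column_stack([np.ones_like(u), u, v, u * v, u ** 2, v ** 2])

def fit_layer(Z, y, keep=2):
    """Fit one layer: a quadratic unit per input pair, trained by least squares
    on the first half of the data and ranked by error on the second half."""
    n = Z.shape[0] // 2
    units = []
    for i, j in combinations(range(Z.shape[1]), 2):
        A = particular_poly(Z[:, i], Z[:, j])
        coef, *_ = np.linalg.lstsq(A[:n], y[:n], rcond=None)
        val_err = np.mean((A[n:] @ coef - y[n:]) ** 2)
        units.append((val_err, i, j, coef))
    units.sort(key=lambda t: t[0])
    best = units[:keep]
    outputs = np.column_stack(
        [particular_poly(Z[:, i], Z[:, j]) @ c for _, i, j, c in best])
    return best, outputs

layer1, H1 = fit_layer(X, y, keep=2)   # 3 candidate pairs -> keep the best 2
layer2, H2 = fit_layer(H1, y, keep=1)  # combine the survivors in a second layer
print("layer-1 validation MSEs:", [round(u[0], 4) for u in layer1])
print("layer-2 validation MSE :", round(layer2[0][0], 4))
```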


2021 ◽  
Author(s):  
Priska Stahel ◽  
Changing Xiao ◽  
Avital Nahmias ◽  
Lili Tian ◽  
Gary Franklin Lewis

Plasma triglyceride-rich lipoproteins (TRL), particularly atherogenic remnant lipoproteins, contribute to atherosclerotic cardiovascular disease (ASCVD). Hypertriglyceridemia may arise in part from hypersecretion of TRLs by the liver and intestine. Here we focus on the complex network of hormonal, nutritional, and neuronal interorgan communication that regulates secretion of TRLs, and provide our perspective on the relative importance of these factors. Hormones and peptides originating from the pancreas (insulin, glucagon), gut (GLP-1, GLP-2, ghrelin, CCK, peptide YY), adipose tissue (leptin, adiponectin) and brain (GLP-1) modulate TRL secretion by receptor-mediated responses and indirectly via neural networks. In addition, the gut microbiome and bile acids influence lipoprotein secretion in humans and animal models. Several nutritional factors modulate hepatic lipoprotein secretion through effects on the central nervous system. Vagal afferent signalling from the gut to the brain and efferent signals from the brain to the liver and gut are modulated by hormonal and nutritional factors to influence TRL secretion. Some of these factors have been extensively studied and shown to have robust regulatory effects whereas others are ‘emerging’ regulators, whose significance remains to be determined. The quantitative importance of these factors relative to one another and relative to the key regulatory role of lipid availability remains largely unknown. Our understanding of the complex interorgan regulation of TRL secretion is rapidly evolving to appreciate the extensive hormonal, nutritional and neural signals emanating not only from gut and liver but also from the brain, pancreas, and adipose tissue.


2006 ◽  
Vol 6 ◽  
pp. 992-997 ◽  
Author(s):  
Alison M. Kerr

More than 20 years of clinical and research experience with affected people in the British Isles has provided insight into particular challenges for therapists, educators, or parents wishing to facilitate learning and to support the development of skills in people with Rett syndrome. This paper considers the challenges in two groups: those due to constraints imposed by the disabilities associated with the disorder and those stemming from the opportunities, often masked by the disorder, allowing the development of skills that depend on less-affected areas of the brain. Because the disorder interferes with the synaptic links between neurones, the functions of the brain that are most dependent on complex neural networks are the most profoundly affected. These functions include speech, memory, learning, generation of ideas, and the planning of fine movements, especially those of the hands. In contrast, spontaneous emotional and hormonal responses appear relatively intact. Whereas failure to appreciate the physical limitations of the disease leads to frustration for therapist and client alike, a clear understanding of the better-preserved areas of competence offers avenues for real progress in learning, the building of satisfying relationships, and achievement of a quality of life.


2020 ◽  
Author(s):  
Soma Nonaka ◽  
Kei Majima ◽  
Shuntaro C. Aoki ◽  
Yukiyasu Kamitani

Achievement of human-level image recognition by deep neural networks (DNNs) has spurred interest in whether and how DNNs are brain-like. Both DNNs and the visual cortex perform hierarchical processing, and correspondence has been shown between hierarchical visual areas and DNN layers in representing visual features. Here, we propose the brain hierarchy (BH) score as a metric to quantify the degree of hierarchical correspondence based on the decoding of individual DNN unit activations from human brain activity. We find that BH scores for 29 pretrained DNNs with varying architectures are negatively correlated with image recognition performance, indicating that recently developed high-performance DNNs are not necessarily brain-like. Experimental manipulations of DNN models suggest that relatively simple feedforward architecture with broad spatial integration is critical to brain-like hierarchy. Our method provides new ways for designing DNNs and understanding the brain in consideration of their representational homology.
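
As a simplified stand-in for the BH score (not the exact metric defined in the paper, which also weights units by decoding accuracy and combines decoding with encoding), the sketch below computes a rank correlation between each decodable DNN unit's layer index and the hierarchical rank of the visual area that decodes it best, using simulated assignments.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Hypothetical decoding results: for each DNN unit that could be decoded from
# brain activity, record its layer index and the visual area (ranked V1=1 ...
# higher areas=4) whose voxels decode it best. Both are simulated here.
n_units = 500
dnn_layer = rng.integers(1, 9, size=n_units)                       # layers 1..8
best_area = np.clip(np.round(dnn_layer / 2.0
                             + rng.normal(0.0, 1.0, n_units)), 1, 4)

# Simplified hierarchy-correspondence score: rank correlation between a unit's
# DNN layer and the hierarchical rank of its best-decoding visual area.
rho, _ = spearmanr(dnn_layer, best_area)
print(f"hierarchy-correspondence score (Spearman rho): {rho:.2f}")
```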


2020 ◽  
Author(s):  
Daniele Grattarola ◽  
Lorenzo Livi ◽  
Cesare Alippi ◽  
Richard Wennberg ◽  
Taufik Valiante

Graph neural networks (GNNs) and the attention mechanism are two of the most significant advances in artificial intelligence methods over the past few years. The former are neural networks able to process graph-structured data, while the latter learns to selectively focus on those parts of the input that are more relevant for the task at hand. In this paper, we propose a methodology for seizure localisation which combines the two approaches. Our method is composed of several blocks. First, we represent brain states in a compact way by computing functional networks from intracranial electroencephalography recordings, using metrics to quantify the coupling between the activity of different brain areas. Then, we train a GNN to correctly distinguish between functional networks associated with interictal and ictal phases. The GNN is equipped with an attention-based layer which automatically learns to identify those regions of the brain (associated with individual electrodes) that are most important for a correct classification. The localisation of these regions is fully unsupervised, meaning that it does not use any prior information regarding the seizure onset zone. We report results both for human patients and for simulators of brain activity. We show that the regions of interest identified by the GNN strongly correlate with the localisation of the seizure onset zone reported by electroencephalographers. We also show that our GNN exhibits uncertainty on those patients for which the clinical localisation was also unsuccessful, highlighting the robustness of the proposed approach.
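
The sketch below illustrates the general shape of such a model in PyTorch, not the authors' exact architecture: one graph convolution over a functional-connectivity matrix followed by an attention readout whose per-electrode weights can be inspected for localisation. The graph, node features, and dimensions are all made up for the example.

```python
import torch
import torch.nn as nn

class AttentionGNN(nn.Module):
    """Minimal sketch: one graph convolution over a functional-connectivity
    graph, then an attention readout whose per-node weights flag candidate
    seizure-onset electrodes."""

    def __init__(self, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.att = nn.Linear(hid_dim, 1)
        self.clf = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        # adj: (n_nodes, n_nodes) functional coupling, x: (n_nodes, in_dim)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        h = torch.relu(self.lin((adj / deg) @ x))   # neighbourhood mixing
        alpha = torch.softmax(self.att(h), dim=0)   # per-electrode attention
        graph_emb = (alpha * h).sum(dim=0)          # attention-weighted pooling
        return self.clf(graph_emb), alpha.squeeze(-1)

# Toy usage: 20 electrodes, 8 coupling-based features per electrode.
n_nodes, in_dim = 20, 8
adj = torch.rand(n_nodes, n_nodes)
adj = (adj + adj.T) / 2                             # symmetric coupling matrix
x = torch.randn(n_nodes, in_dim)
model = AttentionGNN(in_dim, hid_dim=16)
logits, attention = model(adj, x)
print("ictal/interictal logits:", logits.detach().numpy())
print("most attended electrode:", int(attention.argmax()))
```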


2013 ◽  
Vol 7 (1) ◽  
pp. 49-62 ◽  
Author(s):  
Vijaykumar Sutariya ◽  
Anastasia Groshev ◽  
Prabodh Sadana ◽  
Deepak Bhatia ◽  
Yashwant Pathak

Artificial neural network (ANN) technology models the pattern-recognition capabilities of the neural networks of the brain. Like a single neuron in the brain, an artificial neuron unit receives inputs from many external sources, processes them, and makes a decision. In this way, an ANN simulates the biological nervous system and draws on analogues of adaptive biological neurons. ANNs do not require rigidly structured experimental designs and can map functions using historical or incomplete data, which makes them a powerful tool for the simulation of various non-linear systems. ANNs have many applications in various fields, including engineering, psychology, medicinal chemistry, and pharmaceutical research. Because of their capacity for prediction, pattern recognition, and modeling, ANNs have been very useful in many aspects of pharmaceutical research, including modeling of the brain's neural network, analytical data analysis, drug modeling, protein structure and function, dosage optimization and manufacturing, pharmacokinetic and pharmacodynamic modeling, and in vitro-in vivo correlations. This review discusses the applications of ANNs in drug delivery and pharmacological research.
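
As a small example of the kind of function mapping described above, the following PyTorch sketch fits a feed-forward ANN to noisy concentration-time data from a hypothetical one-compartment pharmacokinetic profile; the rate constants, noise level, and network size are purely illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(3)

# Hypothetical training data: noisy plasma concentrations following a
# one-compartment oral-absorption profile (constants are illustrative).
t = rng.uniform(0.0, 24.0, size=(200, 1)).astype(np.float32)
conc = (5.0 * (np.exp(-0.1 * t) - np.exp(-1.0 * t))
        + rng.normal(0.0, 0.1, size=t.shape)).astype(np.float32)

x, y = torch.from_numpy(t), torch.from_numpy(conc)

# Small feed-forward ANN mapping time -> concentration.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
print("predicted concentration at t = 6 h:",
      model(torch.tensor([[6.0]])).item())
```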


2018 ◽  
Vol 17 (3-4) ◽  
pp. 391-411 ◽  
Author(s):  
Elham Askari ◽  
Seyed Kamaledin Setarehdan ◽  
Ali Sheikhani ◽  
Mohammad Reza Mohammadi ◽  
Mohammad Teshnehlab
