Astrocytic Based Controller Shifts Epileptic Activity to the Chaotic State

Author(s):  
Mojdeh Nahtani ◽  
Mahdi Siahi ◽  
Javad Razjouyan ◽  
...  

An effective controller that shifts hippocampal epileptic periodicity back toward normal chaotic behavior would offer new hope for epilepsy treatment. Astrocytes nourish and protect neurons and also help maintain synaptic transmission and network activity. This study therefore explores the ameliorating effect of a computational astrocyte model on epileptic periodicity. Modified Morris-Lecar equations were used to model the hippocampal CA3 network, and the network's inhibitory parameters were adjusted to generate oscillation-induced epileptiform periodicity. The astrocyte controller was based on a functional dynamic mathematical model of brain astrocytes. The results showed that synchronization of the two neural networks shifted the brain's chaotic state to periodicity, and that applying the astrocytic controller to the synchronized networks returned the system to the desynchronized chaotic state. We conclude that astrocytes are a promising basis for controlling epileptic periodicity, although more research is needed to delineate the effect.
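CA3 network models of this kind are typically assembled from Morris-Lecar-type single cells. As a point of reference, the sketch below integrates one Morris-Lecar neuron with a standard textbook parameter set; the parameter values, the Euler scheme, and the constant drive current are illustrative assumptions and do not reproduce the study's modified equations or its astrocytic feedback term.

```python
import numpy as np

# Minimal Morris-Lecar neuron (illustrative parameters, not the study's).
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0        # capacitance, conductances
V_L, V_Ca, V_K = -60.0, 120.0, -84.0            # reversal potentials (mV)
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def m_inf(V):  # fast Ca2+ activation (instantaneous)
    return 0.5 * (1 + np.tanh((V - V1) / V2))

def w_inf(V):  # K+ activation steady state
    return 0.5 * (1 + np.tanh((V - V3) / V4))

def tau_w(V):  # K+ activation time constant
    return 1.0 / np.cosh((V - V3) / (2 * V4))

def simulate(I_ext=90.0, T=500.0, dt=0.05):
    """Euler integration of one neuron driven by a constant current I_ext."""
    n = int(T / dt)
    V, w = -60.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        dV = (I_ext - g_L*(V - V_L) - g_Ca*m_inf(V)*(V - V_Ca) - g_K*w*(V - V_K)) / C
        dw = phi * (w_inf(V) - w) / tau_w(V)
        V += dt * dV
        w += dt * dw
        trace[k] = V
    return trace

voltage = simulate()
print("peak membrane potential (mV):", round(voltage.max(), 1))
```

In the full model, epileptiform periodicity emerges only at the network level once the inhibitory parameters are altered; the single-cell sketch above is just the building block.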

2019 ◽  
Author(s):  
Holger Finger ◽  
Richard Gast ◽  
Christian Gerloff ◽  
Andreas K. Engel ◽  
Peter König

Abstract Dynamic communication and routing play important roles in the human brain, facilitating flexibility in task solving and thought processes. Here, we present a network perturbation methodology that allows investigation of dynamic switching between different network pathways based on phase offsets between two external oscillatory drivers. We apply this method in a computational model of the human connectome with delay-coupled neural masses. To analyze dynamic switching of pathways, we define four new metrics that measure dynamic network response properties for pairs of stimulated nodes. Evaluating these metrics for all network pathways, we found a broad spectrum of pathways with distinct dynamic properties and switching behaviors. Specifically, we found that 60.1% of node pairs can switch their communication from one pathway to another depending on their phase offsets. This indicates that phase offsets and coupling delays play an important computational role in the dynamic switching between communication pathways in the brain.
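The abstract does not give the neural-mass equations, but the basic mechanism behind phase-offset-gated communication can be illustrated with a much smaller toy system: two oscillatory drivers project onto a single receiver along pathways with different conduction delays, and the receiver's response waxes and wanes with the offset between the drivers. All parameters below (frequency, delays, leak rate) are illustrative assumptions, not values from the connectome model.

```python
import numpy as np

# Toy illustration of phase-offset-gated routing: two driven sources converge
# on one leaky receiver node via pathways with different delays; the driver
# phase offset controls how strongly the receiver is engaged.
def receiver_amplitude(offset, f=10.0, delays=(0.005, 0.055),
                       dt=0.0005, T=2.0, gamma=30.0):
    t = np.arange(0.0, T, dt)
    # oscillatory input arriving from each source, shifted by its pathway delay
    s0 = np.sin(2 * np.pi * f * (t - delays[0]))
    s1 = np.sin(2 * np.pi * f * (t - delays[1]) + offset)
    x, trace = 0.0, np.empty_like(t)
    for k in range(len(t)):
        x += dt * (-gamma * x + s0[k] + s1[k])   # leaky receiver dynamics
        trace[k] = x
    return trace[len(t) // 2:].std()             # steady-state response size

for off in np.linspace(0, np.pi, 5):
    print(f"driver offset {off:4.2f} rad -> receiver amplitude {receiver_amplitude(off):.4f}")
```

In this toy, the two pathways interfere constructively or destructively depending on the driver offset, which is the ingredient the paper exploits at connectome scale to switch communication between alternative routes.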


2021 ◽  
pp. 1-25
Author(s):  
Yang Shen ◽  
Julia Wang ◽  
Saket Navlakha

Abstract A fundamental challenge at the interface of machine learning and neuroscience is to uncover computational principles that are shared between artificial and biological neural networks. In deep learning, normalization methods such as batch normalization, weight normalization, and their many variants help to stabilize hidden unit activity and accelerate network training, and these methods have been called one of the most important recent innovations for optimizing deep networks. In the brain, homeostatic plasticity represents a set of mechanisms that also stabilize and normalize network activity to lie within certain ranges, and these mechanisms are critical for maintaining normal brain function. In this article, we discuss parallels between artificial and biological normalization methods at four spatial scales: normalization of a single neuron's activity, normalization of synaptic weights of a neuron, normalization of a layer of neurons, and normalization of a network of neurons. We argue that both types of methods are functionally equivalent—that is, both push activation patterns of hidden units toward a homeostatic state, where all neurons are equally used—and we argue that such representations can improve coding capacity, discrimination, and regularization. As a proof of concept, we develop an algorithm, inspired by a neural normalization technique called synaptic scaling, and show that this algorithm performs competitively against existing normalization methods on several data sets. Overall, we hope this bidirectional connection will inspire neuroscientists and machine learners in three ways: to uncover new normalization algorithms based on established neurobiological principles; to help quantify the trade-offs of different homeostatic plasticity mechanisms used in the brain; and to offer insights about how stability may not hinder, but may actually promote, plasticity.
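The abstract does not spell out the algorithm, so the sketch below shows one plausible reading of synaptic scaling as a normalization rule: each hidden unit multiplicatively rescales its incoming weights so that its average activation drifts toward a homeostatic target. The layer sizes, target activity, and update rate are arbitrary assumptions for illustration, not the authors' published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def synaptic_scaling_step(W, X, target=0.5, rate=0.1):
    """One homeostatic update of the weight matrix W (inputs x units)."""
    A = np.maximum(X @ W, 0.0)               # ReLU activations for a batch X
    mean_act = A.mean(axis=0) + 1e-8          # average activity per unit
    scale = 1.0 + rate * (target / mean_act - 1.0)   # multiplicative nudge
    return W * scale                          # same factor for all of a unit's inputs

X = rng.normal(size=(256, 100))               # toy batch of inputs
W = rng.normal(scale=0.05, size=(100, 32))    # toy layer weights
for _ in range(50):
    W = synaptic_scaling_step(W, X)

print("per-unit mean activity:", np.maximum(X @ W, 0.0).mean(axis=0).round(2))
```

Because ReLU units are positively homogeneous, this multiplicative rule drives every unit's mean activity toward the same target, which is the homeostatic, "all neurons equally used" state the article argues for.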


2019 ◽  
Vol 6 (10) ◽  
pp. 191086 ◽  
Author(s):  
Vibeke Devold Valderhaug ◽  
Wilhelm Robert Glomm ◽  
Eugenia Mariana Sandru ◽  
Masahiro Yasuda ◽  
Axel Sandvig ◽  
...  

In vitro electrophysiological investigation of neural activity at a network level holds tremendous potential for elucidating underlying features of brain function (and dysfunction). In standard neural network modelling systems, however, the fundamental three-dimensional (3D) character of the brain is a largely disregarded feature. This widely applied neuroscientific strategy affects several aspects of the structure–function relationships of the resulting networks, altering network connectivity and topology, ultimately reducing the translatability of the results obtained. As these model systems increase in popularity, it becomes imperative that they capture, as accurately as possible, fundamental features of neural networks in the brain, such as small-worldness. In this report, we combine in vitro neural cell culture with a biologically compatible scaffolding substrate, surface-grafted polymer particles (PPs), to develop neural networks with 3D topology. Furthermore, we investigate their electrophysiological network activity through the use of 3D multielectrode arrays. The resulting neural network activity shows emergent behaviour consistent with maturing neural networks capable of performing computations, i.e. activity patterns suggestive of both information segregation (desynchronized single spikes and local bursts) and information integration (network spikes). Importantly, we demonstrate that the resulting PP-structured neural networks show both structural and functional features consistent with small-world network topology.
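Small-worldness of such networks is commonly quantified by comparing clustering and path length against a random reference graph, for example via sigma = (C / C_rand) / (L / L_rand). The sketch below (assuming networkx is available) applies this to a random toy connectivity matrix, so sigma comes out near 1; functional networks derived from real MEA recordings, unlike this toy, would be expected to give sigma well above 1.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_nodes = 60
corr = rng.random((n_nodes, n_nodes))
corr = (corr + corr.T) / 2                     # symmetric toy "connectivity"
A = (corr > 0.6).astype(int)
np.fill_diagonal(A, 0)                          # binarize by threshold

G = nx.from_numpy_array(A)
G = G.subgraph(max(nx.connected_components(G), key=len))   # largest component

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# size- and density-matched random reference graph
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
R = R.subgraph(max(nx.connected_components(R), key=len))
sigma = (C / nx.average_clustering(R)) / (L / nx.average_shortest_path_length(R))
print(f"C={C:.3f}, L={L:.3f}, small-worldness sigma={sigma:.2f}")
```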


2020 ◽  
Author(s):  
Yang Shen ◽  
Julia Wang ◽  
Saket Navlakha

Abstract A fundamental challenge at the interface of machine learning and neuroscience is to uncover computational principles that are shared between artificial and biological neural networks. In deep learning, normalization methods, such as batch normalization, weight normalization, and their many variants, help to stabilize hidden unit activity and accelerate network training, and these methods have been called one of the most important recent innovations for optimizing deep networks. In the brain, homeostatic plasticity represents a set of mechanisms that also stabilize and normalize network activity to lie within certain ranges, and these mechanisms are critical for maintaining normal brain function. In this survey, we discuss parallels between artificial and biological normalization methods at four spatial scales: normalization of a single neuron’s activity, normalization of synaptic weights of a neuron, normalization of a layer of neurons, and normalization of a network of neurons. We argue that both types of methods are functionally equivalent — i.e., they both push activation patterns of hidden units towards a homeostatic state, where all neurons are equally used — and that such representations can increase coding capacity, discrimination, and regularization. As a proof of concept, we develop a neural normalization algorithm, inspired by a phenomenon called synaptic scaling, and show that this algorithm performs competitively against existing normalization methods on several datasets. Overall, we hope this connection will inspire machine learners in three ways: to uncover new normalization algorithms based on established neurobiological principles; to help quantify the trade-offs of different homeostatic plasticity mechanisms used in the brain; and to offer insights about how stability may not hinder, but may actually promote, plasticity.


2017 ◽  
Vol 39 (5) ◽  
pp. 859-873 ◽  
Author(s):  
Justus Schneider ◽  
Nikolaus Berndt ◽  
Ismini E Papageorgiou ◽  
Jana Maurer ◽  
Sascha Bulik ◽  
...  

Cortical information processing comprises various activity states emerging from timed synaptic excitation and inhibition. However, the underlying energy metabolism is widely unknown. We determined the cerebral metabolic rate of oxygen (CMRO2) along a tissue depth of <0.3 mm in the hippocampal CA3 region during various network activities, including gamma oscillations and sharp wave-ripples that occur during wakefulness and sleep. These physiological states are associated with sensory perception and memory formation and critically depend on perisomatic GABAergic inhibition. Moreover, we modelled vascular oxygen delivery based on quantitative microvasculature analysis. (1) Local CMRO2 was highest during gamma oscillations (3.4 mM/min), medium during sharp wave-ripples, asynchronous activity and isoflurane application (2.0–1.6 mM/min), and lowest during tetrodotoxin application (1.4 mM/min). (2) Axonal and synaptic signaling accounted for >50% of energy expenditure during gamma oscillations. (3) CMRO2 positively correlated with the number and synchronisation of activated synapses, and with neural multi-unit activity. (4) The median capillary distance was 44 µm. (5) A vascular oxygen partial pressure of 33 mmHg was needed to sustain oxidative phosphorylation during gamma oscillations. We conclude that gamma oscillations featuring high energetics require a hemodynamic response to match the oxygen consumption of respiring mitochondria, and that perisomatic inhibition significantly contributes to the brain energy budget.
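The vascular model itself is not reproduced in the abstract. A classic back-of-the-envelope way to relate CMRO2, capillary spacing, and the required capillary pO2 is the Krogh cylinder; the sketch below uses textbook-style diffusivity and solubility values and a guessed capillary radius, so it will not recover the paper's 33 mmHg figure, which rests on measured microvasculature geometry and a more detailed model.

```python
import numpy as np

# Rough Krogh-cylinder estimate of the capillary pO2 needed to keep tissue
# oxygenated at a given CMRO2 and capillary spacing. NOT the authors' model;
# all parameter values below are illustrative assumptions.
CMRO2_mM_per_min = 3.4                  # oxygen consumption during gamma oscillations
M = CMRO2_mM_per_min * 1e-6 / 60.0      # -> mol / (cm^3 * s)
D = 1.5e-5                              # O2 diffusivity in tissue, cm^2/s (assumed)
alpha = 1.35e-9                         # O2 solubility, mol/(cm^3 * mmHg) (assumed)
K = D * alpha                           # Krogh diffusion coefficient
r_c = 3e-4                              # capillary radius, cm (assumed)
R = 22e-4                               # half the 44 um median capillary distance, cm
P_mito = 1.0                            # minimal pO2 at mitochondria, mmHg (assumed)

# Krogh-Erlang pressure drop from the capillary wall to the edge of the cylinder
drop = M / (2 * K) * (R**2 * np.log(R / r_c) - (R**2 - r_c**2) / 2)
print(f"required capillary pO2 (rough estimate): {P_mito + drop:.1f} mmHg")
```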


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kosuke Takagi

Abstract Energy constraints are a fundamental limitation of the brain, which is physically embedded in a restricted space. The collective dynamics of neurons through connections enable the brain to achieve rich functionality, but building connections and maintaining activity come at a high cost. The effects of reducing these costs can be found in the characteristic structures of the brain network. Nevertheless, the mechanism by which energy constraints affect the organization and formation of the neuronal network in the brain is unclear. Here, it is shown that a simple model based on cost minimization can reproduce structures characteristic of the brain network. With reference to the behavior of neurons in real brains, the cost function was introduced in an activity-dependent form that relates the activity cost and the wiring cost as a simple ratio. Reducing this ratio strengthened connections, especially at highly activated nodes, and induced the formation of large clusters. For these network features, statistical similarity was confirmed by comparison to connectome datasets from various real brains. The findings indicate that these networks share an efficient structure maintained at low cost, both for activity and for wiring. These results imply a crucial role for energy constraints in regulating the network activity and structure of the brain.
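The exact form of the activity-to-wiring cost ratio is not given in the abstract. The toy sketch below adopts one plausible reading: each candidate connection is scored by the activity of its endpoints divided by its wiring length, and edges are added greedily under a fixed wiring budget. The node positions, activity distribution, and budget are arbitrary assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
pos = rng.random((n, 2))                     # node positions in a unit square
act = rng.lognormal(sigma=1.0, size=n)       # heterogeneous node activities

# score each candidate edge by activity benefit per unit wiring length
i, j = np.triu_indices(n, k=1)
dist = np.linalg.norm(pos[i] - pos[j], axis=1)
score = act[i] * act[j] / dist

# keep the best-scoring edges under a fixed total wiring budget
order = np.argsort(-score)
budget, length, keep = 10.0, 0.0, []
for k in order:
    if length + dist[k] > budget:
        continue
    keep.append(k)
    length += dist[k]

deg = np.zeros(n)
np.add.at(deg, i[np.array(keep)], 1)
np.add.at(deg, j[np.array(keep)], 1)
print("correlation(activity, degree) =", np.corrcoef(act, deg)[0, 1].round(2))
```

Even in this crude version, the surviving connections concentrate on highly active nodes, echoing the clustering effect described in the abstract.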


1989 ◽  
Vol 1 (3) ◽  
pp. 201-222 ◽  
Author(s):  
Adam N. Mamelak ◽  
J. Allan Hobson

Bizarreness is an easily measured cognitive feature common to REM sleep dreams. Because bizarreness is highly specific to dreaming, we propose that it is most likely brought about by changes in neuronal activity that are specific to REM sleep. At the level of the dream plot, bizarreness can be defined as either discontinuity or incongruity. In addition, the dreamer's thoughts about the plot may be logically deficient. We propose that dream bizarreness is the cognitive concomitant of two kinds of changes in neuronal dynamics during REM sleep. One is the disinhibition of forebrain networks caused by the withdrawal of the modulatory influences of norepinephrine (NE) and serotonin (5HT) in REM sleep, secondary to cessation of firing of locus coeruleus and dorsal raphe neurons. This aminergic demodulation can be mathematically modeled as a shift toward increased error at the outputs from neural networks, and these errors might be represented cognitively as incongruities and/or discontinuities. We also consider the possibility that discontinuities are the cognitive concomitant of sudden bifurcations or “jumps” in the responses of forebrain neuronal networks. These bifurcations are caused by phasic discharge of pontogeniculooccipital (PGO) neurons during REM sleep, providing a source of cholinergic modulation to the forebrain that could evoke unpredictable network responses. When phasic PGO activity stops, the resultant activity in the brain may be wholly unrelated to the patterns of activity dominant before such phasic stimulation began. Mathematically, such a sudden shift from one pattern of activity to a second, unrelated one is called a bifurcation. We propose that the neuronal bifurcations brought about by PGO activity might be represented cognitively as bizarre discontinuities of dream plot. We regard these proposals as preliminary attempts to model the relationship between dream cognition and REM sleep neurophysiology. This neurophysiological model of dream bizarreness may also prove useful in understanding the contributions of REM sleep to the developmental and experiential plasticity of the cerebral cortex.
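The claim that aminergic demodulation amounts to increased output error can be made concrete with a generic toy model (not the authors' formulation): a small Hopfield-style network retrieves a stored pattern under stochastic dynamics, and raising the noise level, standing in for withdrawn modulation, increases the fraction of erroneous output units.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)                      # Hebbian weights, no self-coupling

def retrieve(cue, noise, steps=50):
    """Stochastic (Glauber-style) retrieval; 'noise' stands in for demodulation."""
    s = cue.copy()
    for _ in range(steps):
        h = W @ s                             # local fields
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h / noise))
        s = np.where(rng.random(N) < p_plus, 1.0, -1.0)
    return s

cue = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)   # 20% corrupted cue
for noise in (0.1, 0.5, 1.5):
    out = retrieve(cue, noise)
    print(f"noise {noise:3.1f}: output error rate = {np.mean(out != patterns[0]):.2f}")
```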


2015 ◽  
Vol 370 (1668) ◽  
pp. 20140170 ◽  
Author(s):  
Riitta Hari ◽  
Lauri Parkkonen

We discuss the importance of timing in brain function: how the temporal dynamics of the world have left their traces in the brain during evolution and how we can monitor the dynamics of the human brain with non-invasive measurements. Accurate timing is important for the interplay of neurons, neuronal circuitries, brain areas and human individuals. In the human brain, multiple temporal integration windows are hierarchically organized, with temporal scales ranging from microseconds to tens and hundreds of milliseconds for perceptual, motor and cognitive functions, and up to minutes, hours and even months for hormonal and mood changes. Accurate timing is impaired in several brain diseases. From the current repertoire of non-invasive brain imaging methods, only magnetoencephalography (MEG) and scalp electroencephalography (EEG) provide millisecond time resolution; our focus in this paper is on MEG. Since the introduction of high-density whole-scalp MEG/EEG coverage in the 1990s, the instrumentation has not changed drastically; yet, novel data analyses are advancing the field rapidly by shifting the focus from the mere pinpointing of activity hotspots to seeking stimulus- or task-specific information and to characterizing functional networks. During the next decades, we can expect increased spatial resolution and accuracy of time-resolved brain imaging and a better understanding of brain function, especially its temporal constraints, with the development of novel instrumentation and finer-grained, physiologically inspired generative models of local and network activity. Merging both spatial and temporal information with increasing accuracy and carrying out recordings in naturalistic conditions, including social interaction, will bring much new information about human brain function.

