A Correspondence between Normalization Strategies in Artificial and Biological Neural Networks

2021 ◽  
pp. 1-25
Author(s):  
Yang Shen ◽  
Julia Wang ◽  
Saket Navlakha

Abstract A fundamental challenge at the interface of machine learning and neuroscience is to uncover computational principles that are shared between artificial and biological neural networks. In deep learning, normalization methods such as batch normalization, weight normalization, and their many variants help to stabilize hidden unit activity and accelerate network training, and these methods have been called one of the most important recent innovations for optimizing deep networks. In the brain, homeostatic plasticity represents a set of mechanisms that also stabilize and normalize network activity to lie within certain ranges, and these mechanisms are critical for maintaining normal brain function. In this article, we discuss parallels between artificial and biological normalization methods at four spatial scales: normalization of a single neuron's activity, normalization of synaptic weights of a neuron, normalization of a layer of neurons, and normalization of a network of neurons. We argue that both types of methods are functionally equivalent—that is, both push activation patterns of hidden units toward a homeostatic state, where all neurons are equally used—and we argue that such representations can improve coding capacity, discrimination, and regularization. As a proof of concept, we develop an algorithm, inspired by a neural normalization technique called synaptic scaling, and show that this algorithm performs competitively against existing normalization methods on several data sets. Overall, we hope this bidirectional connection will inspire neuroscientists and machine learners in three ways: to uncover new normalization algorithms based on established neurobiological principles; to help quantify the trade-offs of different homeostatic plasticity mechanisms used in the brain; and to offer insights about how stability may not hinder, but may actually promote, plasticity.
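
The abstract does not spell out the proposed algorithm, but the general idea it invokes (multiplicatively scaling each hidden unit's incoming weights so that its time-averaged activity drifts toward a shared target, as in synaptic scaling) can be sketched as follows. The class name, target rate, and update rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class SynapticScalingLayer:
    """Toy dense layer with a synaptic-scaling-style homeostatic update.

    Illustrative sketch only: each unit tracks an exponential moving average
    of its own activity and multiplicatively rescales its incoming weights
    toward a target rate shared by all units.
    """

    def __init__(self, n_in, n_out, target_rate=0.1, tau=100.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.target = target_rate                # shared homeostatic set point
        self.avg = np.full(n_out, target_rate)   # running estimate of each unit's activity
        self.tau = tau                           # time constant of the running average

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W)          # ReLU activations
        # Update the slow, per-unit estimate of mean activity (batch-averaged).
        self.avg += (h.mean(axis=0) - self.avg) / self.tau
        return h

    def scale(self):
        # Multiplicative scaling of incoming weights: under-active units are
        # scaled up and over-active units scaled down, pushing all units toward
        # equal use (the "homeostatic state" described in the abstract).
        factor = self.target / (self.avg + 1e-8)
        self.W *= factor[np.newaxis, :]


# Usage sketch: interleave forward passes with occasional scaling steps.
layer = SynapticScalingLayer(n_in=32, n_out=16)
for step in range(200):
    x = np.random.default_rng(step).normal(size=(64, 32))
    layer.forward(x)
    if step % 10 == 0:
        layer.scale()
```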


2021 ◽  
Vol 14 ◽  
Author(s):  
Hyojin Bae ◽  
Sang Jeong Kim ◽  
Chang-Eop Kim

One of the central goals in systems neuroscience is to understand how information is encoded in the brain, and the standard approach is to identify the relation between a stimulus and a neural response. However, the features of a stimulus are typically defined by the researcher's hypothesis, which can bias the conclusions. To illustrate potential biases, we simulate four plausible scenarios using deep neural networks trained on the image classification dataset CIFAR-10 and show the possibility of selecting suboptimal or irrelevant features, or of overestimating the network's feature representation or noise correlation. Additionally, we present studies investigating neural coding principles in biological neural networks to which our points apply. This study aims not only to highlight the importance of careful assumptions and interpretations regarding the neural response to stimulus features, but also to suggest that comparative study of deep and biological neural networks from the perspective of machine learning can be an effective strategy for understanding the coding principles of the brain.
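
As a toy illustration of the kind of bias the abstract describes (not the authors' actual simulations), one can check how strongly a researcher-chosen stimulus feature correlates with a unit's response; a convenient feature such as mean brightness may explain almost nothing when the unit is actually driven by a different image property. A minimal sketch, with random images standing in for CIFAR-10:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in stimuli (random images in place of CIFAR-10) and a toy "neuron"
# whose response is actually driven by image contrast, not brightness.
images = rng.uniform(0.0, 1.0, size=(500, 32, 32, 3))
contrast = images.std(axis=(1, 2, 3))             # feature that truly drives the unit
response = 2.0 * contrast + 0.1 * rng.normal(size=500)

# Researcher-chosen (hypothesized) feature: mean brightness.
brightness = images.mean(axis=(1, 2, 3))

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print("corr(response, brightness):", corr(response, brightness))  # hypothesized feature
print("corr(response, contrast):  ", corr(response, contrast))    # actual driving feature
```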


2013 ◽  
Vol 25 (11) ◽  
pp. 2815-2832 ◽  
Author(s):  
Mathieu N. Galtier ◽  
Gilles Wainrib

Identifying, formalizing, and combining biological mechanisms that implement known brain functions, such as prediction, is a main aspect of research in theoretical neuroscience. In this letter, the mechanisms of spike-timing-dependent plasticity and homeostatic plasticity, combined in an original mathematical formalism, are shown to shape recurrent neural networks into predictors. Following a rigorous mathematical treatment, we prove that they implement the online gradient descent of a distance between the network activity and its stimuli. The convergence to an equilibrium, where the network can spontaneously reproduce or predict its stimuli, does not suffer from bifurcation issues usually encountered in learning in recurrent neural networks.
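
In schematic form (our notation, not the letter's), the claimed equivalence is that the combined plasticity rules perform an online gradient step on a mismatch between network activity and stimulus, for example
\[
\Delta W(t) \;\propto\; -\,\eta \, \nabla_{W} \, \bigl\lVert x_{W}(t) - s(t) \bigr\rVert^{2},
\]
where $W$ denotes the recurrent weights, $x_W(t)$ the network activity, $s(t)$ the stimulus, and $\eta$ a learning rate; at the resulting equilibrium the network can spontaneously reproduce or predict its stimuli.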


2020 ◽  
Author(s):  
Katharina Anna Wilmes ◽  
Claudia Clopath

With Hebbian learning ('cells that fire together wire together'), well-known problems arise. On the one hand, plasticity can lead to unstable network dynamics, manifesting as runaway activity or silence. On the other hand, plasticity can erase or overwrite stored memories. Unstable dynamics can partly be addressed with homeostatic plasticity mechanisms. Unfortunately, the homeostatic time constants required in network models are much shorter than those measured experimentally. Here, we propose that homeostatic time constants can be slow if plasticity is gated. We investigate how the gating of plasticity influences the stability of network activity and stored memories. We use plastic balanced spiking neural networks consisting of excitatory neurons with a somatic and a dendritic compartment (which resemble cortical pyramidal cells in their firing properties), and inhibitory neurons targeting those compartments. We compare how different factors such as excitability, learning rate, and inhibition can relax the requirement on the critical time constant of homeostatic plasticity. We specifically investigate how gating of dendritic versus somatic plasticity allows for different amounts of weight change in networks with the same critical homeostatic time constant. We suggest that the striking compartmentalisation of pyramidal cells and their inhibitory inputs enables large synaptic changes at the dendrite while maintaining network stability. We additionally show that spatially restricted plasticity in a subpopulation of the network improves stability. Finally, we compare how the different gates affect the stability of memories in the network.
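
A toy rate-based sketch of the core idea (our drastic simplification, not the paper's balanced spiking model with two-compartment pyramidal neurons): Hebbian growth, a slow homeostatic term with time constant tau_h, and a binary gate that restricts when Hebbian plasticity is on. The names and dynamics are illustrative assumptions.

```python
import numpy as np

def simulate(gate_fraction, tau_h=1000.0, steps=5000, eta=0.01, target=1.0, seed=1):
    """Single-unit toy: gated Hebbian growth vs. slow homeostatic scaling.

    gate_fraction is the fraction of time steps on which Hebbian plasticity
    is allowed; the homeostatic term always runs with time constant tau_h.
    """
    rng = np.random.default_rng(seed)
    w, rates = 1.0, []
    for _ in range(steps):
        x = rng.normal(1.0, 0.1)            # presynaptic rate
        r = max(0.0, w * x)                 # postsynaptic rate
        if rng.random() < gate_fraction:
            w += eta * x * r                # Hebbian term (only when gated on)
        w += (target - r) * w / tau_h       # slow homeostatic scaling toward target
        rates.append(r)
    return np.mean(rates[-500:])

# With the same slow homeostatic time constant, heavy gating keeps activity
# much closer to the homeostatic target, while ungated Hebbian plasticity
# settles far above it.
print("gated (5% of steps):", simulate(gate_fraction=0.05))
print("ungated:            ", simulate(gate_fraction=1.0))
```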


2019 ◽  
Vol 6 (10) ◽  
pp. 191086 ◽  
Author(s):  
Vibeke Devold Valderhaug ◽  
Wilhelm Robert Glomm ◽  
Eugenia Mariana Sandru ◽  
Masahiro Yasuda ◽  
Axel Sandvig ◽  
...  

In vitro electrophysiological investigation of neural activity at a network level holds tremendous potential for elucidating underlying features of brain function (and dysfunction). In standard neural network modelling systems, however, the fundamental three-dimensional (3D) character of the brain is a largely disregarded feature. This widely applied neuroscientific strategy affects several aspects of the structure–function relationships of the resulting networks, altering network connectivity and topology, ultimately reducing the translatability of the results obtained. As these model systems increase in popularity, it becomes imperative that they capture, as accurately as possible, fundamental features of neural networks in the brain, such as small-worldness. In this report, we combine in vitro neural cell culture with a biologically compatible scaffolding substrate, surface-grafted polymer particles (PPs), to develop neural networks with 3D topology. Furthermore, we investigate their electrophysiological network activity through the use of 3D multielectrode arrays. The resulting neural network activity shows emergent behaviour consistent with maturing neural networks capable of performing computations, i.e. activity patterns suggestive of both information segregation (desynchronized single spikes and local bursts) and information integration (network spikes). Importantly, we demonstrate that the resulting PP-structured neural networks show both structural and functional features consistent with small-world network topology.
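
Small-worldness is a quantitative claim; one common way to check it on a reconstructed connectivity graph is the sigma coefficient (clustering relative to an equivalent random graph, divided by path length relative to that random graph), for which networkx ships an implementation. A sketch on a synthetic graph standing in for the reconstructed PP-network connectivity, which we do not have here:

```python
import networkx as nx

# Synthetic stand-in for a reconstructed functional connectivity graph:
# a Watts-Strogatz graph is small-world by construction.
G = nx.connected_watts_strogatz_graph(n=60, k=6, p=0.1, seed=42)

# sigma = (C / C_rand) / (L / L_rand); values > 1 indicate small-world topology.
# niter and nrand are kept small here only to make the sketch run quickly.
sigma = nx.sigma(G, niter=5, nrand=3, seed=42)
print(f"small-world coefficient sigma = {sigma:.2f}")
```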


2012 ◽  
Vol 24 (6) ◽  
pp. 1487-1518 ◽  
Author(s):  
Lakshminarayan V. Chinta ◽  
Douglas B. Tweed

Many neural control systems are at least roughly optimized, but how is optimal control learned? There are algorithms for this purpose, but in their current forms, they are not suited for biological neural networks because they rely on a type of communication that is not available in the brain, namely, weight transport—transmitting the strengths, or “weights,” of individual synapses to other synapses and neurons. Here we show how optimal control can be learned without weight transport. Our method involves a set of simple mechanisms that can compensate for the absence of weight transport in the brain and so may be useful for neural computation generally.
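
The abstract does not detail the authors' mechanisms, but a well-known illustration of learning without weight transport in a related setting is feedback alignment (Lillicrap et al., 2016), in which errors are propagated through fixed random matrices instead of the transposes of the forward weights. The sketch below shows that flavor of solution; it is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer regression network trained without weight transport:
# the backward pass uses a fixed random matrix B instead of W2.T.
n_in, n_hid, n_out = 10, 32, 1
W1 = rng.normal(0, 0.3, (n_in, n_hid))
W2 = rng.normal(0, 0.3, (n_hid, n_out))
B = rng.normal(0, 0.3, (n_out, n_hid))       # fixed random feedback weights

W_true = rng.normal(0, 1.0, (n_in, n_out))   # target linear mapping to learn
lr = 0.01

for step in range(2000):
    x = rng.normal(size=(64, n_in))
    y = x @ W_true
    h = np.tanh(x @ W1)
    y_hat = h @ W2
    e = y_hat - y                            # output error
    dW2 = h.T @ e / len(x)
    dh = (e @ B) * (1 - h**2)                # note: B, not W2.T
    dW1 = x.T @ dh / len(x)
    W2 -= lr * dW2
    W1 -= lr * dW1

print("final MSE:", float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))
```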


Author(s):  
Mojdeh Nahtani ◽  
Mahdi Siahi ◽  
Javad Razjouyan ◽  
...  

Investigating an effective controller that shifts hippocampal epileptic periodicity back to normal chaotic behavior offers new hope for epilepsy treatment. Astrocytes nourish and protect neurons, and also maintain synaptic transmission and network activity. Therefore, this study explores the ameliorating effect of an astrocyte computational model on epileptic periodicity. Modified Morris-Lecar equations were used to model the hippocampal CA3 network. Network inhibitory parameters were employed to generate oscillation-induced epileptiform periodicity. The astrocyte controller was based on a functional dynamic mathematical model of brain astrocytic cells. Results demonstrated that synchronization of two neural networks shifted the brain's chaotic state to periodicity. Applying the astrocytic controller to the synchronized networks returned the system to the desynchronized chaotic state. It is concluded that astrocytes are probably a good model for controlling epileptic periodicity. However, more research is needed to delineate the effect.
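
For orientation, a single-cell sketch of the standard (unmodified) Morris-Lecar equations, integrated with SciPy, is given below; the parameter values are textbook defaults for the Hopf regime, not the modified CA3-network values used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard Morris-Lecar single-neuron model (textbook parameters, not the
# modified CA3 network of the study). V in mV, t in ms.
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0
V_L, V_Ca, V_K = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext = 100.0  # applied current (uA/cm^2), above oscillation onset so the cell fires tonically

def morris_lecar(t, y):
    V, w = y
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf * (V - V_Ca)
          - g_K * w * (V - V_K)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dV, dw]

sol = solve_ivp(morris_lecar, (0.0, 500.0), y0=[-60.0, 0.0], max_step=0.5)
print("peak membrane potential (mV):", float(sol.y[0].max()))
```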


2019 ◽  
Author(s):  
Anthony M. Zador

Abstract Over the last decade, artificial neural networks (ANNs) have undergone a revolution, catalyzed in large part by better tools for supervised learning. However, training such networks requires enormous data sets of labeled examples, whereas young animals (including humans) typically learn with few or no labeled examples. This stark contrast with biological learning has led many in the ANN community to posit that, instead of supervised paradigms, animals must rely primarily on unsupervised learning, spurring the search for better unsupervised algorithms. Here we argue that much of an animal's behavioral repertoire is not the result of clever learning algorithms—supervised or unsupervised—but arises instead from behavior programs already present at birth. These programs arise through evolution, are encoded in the genome, and emerge as a consequence of wiring up the brain. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Recognizing the importance of this highly structured connectivity suggests a path toward building ANNs capable of rapid learning.


2016 ◽  
Vol 116 (5) ◽  
pp. 2093-2104 ◽  
Author(s):  
Christopher M. Filley ◽  
R. Douglas Fields

Whereas the cerebral cortex has long been regarded by neuroscientists as the major locus of cognitive function, the white matter of the brain is increasingly recognized as equally critical for cognition. White matter comprises half of the brain, has expanded more than gray matter in evolution, and forms an indispensable component of distributed neural networks that subserve neurobehavioral operations. White matter tracts mediate the essential connectivity by which human behavior is organized, working in concert with gray matter to enable the extraordinary repertoire of human cognitive capacities. In this review, we present evidence from behavioral neurology that white matter lesions regularly disturb cognition, consider the role of white matter in the physiology of distributed neural networks, develop the hypothesis that white matter dysfunction is relevant to neurodegenerative disorders, including Alzheimer's disease and the newly described entity chronic traumatic encephalopathy, and discuss emerging concepts regarding the prevention and treatment of cognitive dysfunction associated with white matter disorders. Investigation of the role of white matter in cognition has yielded many valuable insights and promises to expand understanding of normal brain structure and function, improve the treatment of many neurobehavioral disorders, and disclose new opportunities for research on many challenging problems facing medicine and society.

