Robust parallel decision-making in neural circuits with nonlinear inhibition

2020
Vol 117 (41)
pp. 25505-25516
Author(s):
Birgit Kriener
Rishidev Chaudhuri
Ila R. Fiete

An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes ∼N log(N) time for N noisy candidate options) by a factor of N, the benchmark for parallel computation. Biologically plausible architectures for this task are winner-take-all (WTA) networks, where individual neurons inhibit each other so only those with the largest input remain active. We show that conventional WTA networks fail the parallelism benchmark and, worse, in the presence of noise, altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as N varies, the nWTA network achieves the parallelism benchmark. The network reproduces experimentally observed phenomena like Hick’s law without needing an additional readout stage or adaptive N-dependent thresholds. Our work bridges scales by linking cellular nonlinearities to circuit-level decision-making, establishes that distributed computation saturating the parallelism benchmark is possible in networks of noisy, finite-memory neurons, and shows that Hick’s law may be a symptom of near-optimal parallel decision-making with noisy input.
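The nWTA mechanism described above can be illustrated with a toy rate model: units excite themselves, inhibit each other, and — the key second nonlinearity — contribute inhibition only while their activity exceeds a threshold. This is an illustrative sketch, not the paper's implementation; all parameter values (theta, alpha, beta, the clipped activation) are assumptions chosen so a single winner emerges.

```python
import numpy as np

def simulate_nwta(inputs, theta=0.15, alpha=1.2, beta=0.9,
                  noise=0.02, dt=0.01, steps=5000, seed=0):
    """Toy rate model of the nWTA idea: a unit contributes inhibition
    only while its activity exceeds the threshold `theta`."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(inputs))
    for _ in range(steps):
        g = np.where(x > theta, x, 0.0)              # second nonlinearity
        inhib = beta * (g.sum() - g)                 # no self-inhibition
        drive = np.clip(inputs + alpha * x - inhib, 0.0, 1.0)
        x += dt * (-x + drive) + np.sqrt(dt) * noise * rng.standard_normal(len(x))
        x = np.maximum(x, 0.0)                       # rates stay non-negative
    return x

b = np.array([0.50, 0.55, 0.48, 0.52])               # noisy candidate values
rates = simulate_nwta(b)
print("winner:", int(np.argmax(rates)))
```

With these (assumed) parameters the unit with the largest input saturates while the others are silenced; because sub-threshold units stop inhibiting, near-losers do not drag the winner down, which is the failure mode of the conventional WTA circuit at large N.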

2017
Author(s):
Birgit Kriener
Rishidev Chaudhuri
Ila R. Fiete

Identifying the maximal element (max, argmax) in a set is a core computational element in inference, decision making, optimization, action selection, consensus, and foraging. Running sequentially through a list of N fluctuating items takes N log(N) time to accurately find the max, which is prohibitively slow for large N. The power of computation in the brain is ascribed in part to its parallelism, yet it is theoretically unclear whether leaky and noisy neurons can perform a distributed computation that cuts the required time of a serial computation by a factor of N, a benchmark for parallel computation. We show that conventional winner-take-all neural networks fail the parallelism benchmark and in the presence of noise altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as the number of options N varies, the nWTA network converges N times faster than the serial strategy at equal accuracy, saturating the parallelism benchmark. The nWTA network self-adjusts integration time with task difficulty to maintain fixed accuracy without parameter change. Finally, the circuit generically exhibits Hick's law for decision speed. Our work establishes that distributed computation that saturates the parallelism benchmark is possible in networks of noisy, finite-memory neurons.


Author(s):
Genís Prat-Ortega
Klaus Wimmer
Alex Roxin
Jaime de la Rocha

Perceptual decisions require the brain to make categorical choices based on accumulated sensory evidence. The underlying computations have been studied using either phenomenological drift diffusion models or neurobiological network models exhibiting winner-take-all attractor dynamics. Although both classes of models can account for a large body of experimental data, it remains unclear to what extent their dynamics are qualitatively equivalent. Here we show that, unlike the drift diffusion model, the attractor model can operate in different integration regimes: an increase in the stimulus fluctuations or the stimulus duration promotes transitions between decision states, leading to a crossover from weighting mostly early evidence (primacy regime) to weighting mostly late evidence (recency regime). Between these two limiting cases, we found a novel regime, which we name flexible categorization, in which fluctuations are strong enough to reverse initial categorizations, but only if they are incorrect. This asymmetry in the reversing probability results in a non-monotonic psychometric curve, a novel and distinctive feature of the attractor model. Finally, we show psychophysical evidence for the crossover between integration regimes predicted by the attractor model and for the relevance of this new regime. Our findings point to correcting transitions as an important yet overlooked feature of perceptual decision making.
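The contrast between the two model classes can be sketched in a few lines. Below, a drift-diffusion process integrates evidence perfectly until it hits a bound, while the attractor model is reduced to motion in a double-well potential, dx = (x − x³ + drift) dt + noise, whose wells at x ≈ ±1 are the two decision states; strong fluctuations can carry the state over the barrier at x = 0 and reverse an initial categorization. All equations and parameters here are illustrative assumptions, not the authors' fitted models.

```python
import numpy as np

def ddm_choice(drift, sigma, dt=0.001, bound=1.0, seed=0, max_t=10.0):
    """Drift-diffusion: perfect integration of evidence to a bound."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

def attractor_choice(drift, sigma, dt=0.001, T=3.0, seed=0):
    """Double-well attractor: dx = (x - x**3 + drift) dt + noise.
    The wells near x = +1 and x = -1 are the two decision states."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(int(T / dt)):
        x += (x - x**3 + drift) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return 1 if x > 0 else -1
```

With sigma = 0 both models simply follow the drift; raising sigma in the attractor model is what opens the transition-driven regimes (primacy, recency, flexible categorization) described in the abstract.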


2014
Vol 26 (9)
pp. 1973-2004
Author(s):
Hesham Mostafa
Giacomo Indiveri

Understanding the sequence generation and learning mechanisms used by recurrent neural networks in the nervous system is an important problem that has been studied extensively. However, most of the models proposed in the literature are either incompatible with experimental findings from neuroanatomy and neurophysiology, or are not robust to noise and rely on fine-tuning of the parameters. In this work, we propose a novel model of sequence learning and generation that is based on the interactions among multiple asymmetrically coupled winner-take-all (WTA) circuits. The network architecture is consistent with mammalian cortical connectivity data and uses realistic neuronal and synaptic dynamics that give rise to noise-robust patterns of sequential activity. The novel aspect of the network we propose lies in its ability to produce robust patterns of sequential activity that can be halted, resumed, and readily modulated by external input, and in its ability to make use of realistic plastic synapses to learn and reproduce arbitrary input-imposed sequential patterns. Sequential activity takes the form of a single activity bump that stably propagates through multiple WTA circuits along one of a number of possible paths. Because the network can be configured to either generate spontaneous sequences or wait for external inputs to trigger a transition in the sequence, it provides the basis for creating state-dependent perception-action loops. We first analyze a rate-based approximation of the proposed spiking network to highlight the relevant features of the network dynamics and then show numerical simulation results with spiking neurons, realistic conductance-based synapses, and spike-timing-dependent plasticity (STDP) rules to validate the rate-based model.
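The core idea — an activity bump held in place by recurrent excitation and pushed to the next WTA stage by asymmetric coupling, gated by external input — can be reduced to a minimal discrete-time sketch. The gain values and the one-unit-per-stage simplification are illustrative assumptions, not the paper's spiking model.

```python
import numpy as np

N = 5                                    # number of WTA stages in the chain
W_asym = np.roll(np.eye(N), 1, axis=0)   # asymmetric coupling: stage k excites stage k+1

def advance(bump, gate):
    """One synchronous update: the active stage holds itself via recurrent
    excitation; when the external `gate` input is on, the asymmetric
    feedforward drive wins the WTA competition and the bump moves on."""
    drive = 2.0 * bump + gate * 3.0 * (W_asym @ bump)
    winner = int(np.argmax(drive))       # winner-take-all within the chain
    out = np.zeros(N)
    out[winner] = 1.0
    return out

bump = np.zeros(N)
bump[0] = 1.0                            # bump starts at stage 0
bump = advance(bump, gate=0.0)           # no gate: the bump holds its stage
bump = advance(bump, gate=1.0)           # gate on: the bump advances one stage
```

Because the transition only happens when the gate input outweighs the self-hold term, the same circuit can either wait for external triggers or, with a tonic gate, generate the sequence spontaneously — the property the abstract highlights.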


2020
pp. 1-10
Author(s):
Reza Shadmehr
Alaa A. Ahmed

Why do we run toward people we love, but only walk toward others? Why do people in New York seem to walk faster than people in other cities? Why do our eyes linger longer on things we value more? There is a link between how the brain assigns value to things, and how it controls our movements. This link is an ancient one, developed through shared neural circuits that on one hand teach us how to value things, and on the other hand control the vigor with which we move. As a result, when there is damage to systems that signal reward, like dopamine and serotonin, that damage not only affects our mood and patterns of decision making, but also how we move. In this book, we first ask why in principle evolution should have developed a shared system of control between valuation and vigor. We then focus on the neural basis of vigor, synthesizing results from experiments that have measured activity in various brain structures and neuromodulators, during tasks in which animals decide how patiently they should wait for reward, and how vigorously they should move to acquire it. Thus, the way we move unmasks one of our well-guarded secrets: how much we value the thing we are moving toward.


eLife
2018
Vol 7
Author(s):
Yang Xie
Chechang Nie
Tianming Yang

During value-based decision making, we often evaluate the value of each option sequentially by shifting our attention, even when the options are presented simultaneously. The orbitofrontal cortex (OFC) has been suggested to encode value during value-based decision making. Yet it is not known how its activity is modulated by attention shifts. We investigated this question by employing a passive viewing task that allowed us to disentangle the effects of attention, value, choice, and eye movement. We found that attention modulated OFC activity through a winner-take-all mechanism. When we attracted the monkeys’ attention covertly, the OFC neuronal activity reflected the reward value of the newly attended cue. The shift of attention could be explained by a normalization model. Our results strongly argue for the hypothesis that OFC neuronal activity represents the value of the attended item. They provide important insights toward understanding the OFC’s role in value-based decision making.
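The winner-take-all account can be written as a divisive-normalization readout with hard attention weights: the attended cue's value dominates the normalized population response, and shifting attention switches which value is represented. The specific functional form, the σ constant, and the example values below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def ofc_population(values, attended, sigma=0.1):
    """Divisive normalization with winner-take-all attention weights:
    only the attended cue's value survives, scaled by the summed value
    of all cues on the screen."""
    w = np.zeros(len(values))
    w[attended] = 1.0                        # hard (WTA) attention weights
    return (w * values) / (sigma + np.sum(values))

vals = np.array([2.0, 5.0, 1.0])             # hypothetical reward values of three cues
resp = ofc_population(vals, attended=0)      # attention shifts covertly to cue 0
print(resp)
```

Under this sketch the population response tracks the newly attended cue's value even when another cue on the screen is worth more — the signature reported in the abstract.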


2009
Vol 102 (1)
pp. 1-6
Author(s):
Kenji Morita

On the basis of accumulating behavioral and neural evidence, it has recently been proposed that the brain neural circuits of humans and animals are equipped with several specific properties, which ensure that perceptual decision making implemented by the circuits can be nearly optimal in terms of Bayesian inference. Here, I introduce the basic ideas of such a proposal and discuss its implications from the standpoint of biophysical modeling developed in the framework of dynamical systems.


2016
Vol 2016
pp. 1-16
Author(s):
Sheena Sharma
Priti Gupta
C. M. Markan

Stereopsis, or depth perception, is a critical aspect of information processing in the brain and is computed from the positional shift, or disparity, between the images seen by the two eyes. Various algorithms and their hardware implementations that compute disparity in real time have been proposed; however, most of them compute disparity through complex mathematical calculations that are difficult to realize in hardware and are biologically unrealistic. The brain presumably uses simpler methods to extract depth information from the environment, and hence newer methodologies that could perform stereopsis with brain-like elegance need to be explored. This paper proposes an innovative aVLSI design that leverages the columnar organization of ocular dominance in the brain and uses time-staggered Winner Take All (ts-WTA) to adaptively create disparity-tuned cells. Physiological findings support the presence of disparity cells in the visual cortex and show that these cells surface as a result of binocular stimulation received after birth. Therefore, creating, in hardware, cells that can learn different disparities with experience not only is novel but also is biologically more realistic. These disparity cells, when allowed to interact diffusively on a larger scale, can be used to adaptively create stable topological disparity maps in silicon.
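The underlying computation — compare the two eyes' signals at candidate shifts and let the best match win — can be sketched in software as a winner-take-all over match scores. This is a stand-in for the adaptive ts-WTA circuit, not the aVLSI design itself; the dot-product matching score and the 1-D signals are assumptions for illustration.

```python
import numpy as np

def wta_disparity(left, right, max_d=4):
    """Return the shift of the right signal that best matches the left one:
    each candidate disparity gets a match score, and a winner-take-all
    over the scores selects the perceived disparity."""
    shifts = list(range(-max_d, max_d + 1))
    scores = [float(np.dot(left, np.roll(right, d))) for d in shifts]
    return shifts[int(np.argmax(scores))]

rng = np.random.default_rng(0)
left = rng.standard_normal(64)       # 1-D stand-in for the left-eye image row
right = np.roll(left, -2)            # right-eye row shifted by 2 pixels
print(wta_disparity(left, right))    # the WTA recovers the imposed shift
```

In the paper the disparity tuning is learned adaptively in silicon from binocular stimulation rather than fixed by a formula; the sketch only shows the selection step that the WTA performs once tuned cells exist.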


2021
Vol 15
Author(s):
Qiang Xu
Qirui Zhang
Gaoping Liu
Xi-jian Dai
Xinyu Xie
...

Brain structural covariance network (SCN) analysis can delineate synchronized brain alterations over long time periods. It has been used in research on cognition and neuropsychiatric disorders. Recently, causal analysis of structural covariance networks (CaSCN), winner-take-all cortex–subcortex covariance networks (WTA-CSSCN), and modulation analysis of structural covariance networks (MOD-SCN) have expanded the methodological breadth of SCN. However, the lack of user-friendly software has limited the further application of SCN in research. In this work, we developed a graphical user interface (GUI) toolkit for brain structural covariance connectivity based on the MATLAB platform. The software contains analyses of SCN, CaSCN, MOD-SCN, and WTA-CSSCN. Group comparison and result visualization modules are also included. Furthermore, a simple demonstration on a demo dataset is presented. We hope that the toolkit will help researchers, especially clinical researchers, carry out brain covariance connectivity analyses more easily in future work.


2017
Vol 29 (2)
pp. 368-393
Author(s):
Nils Kurzawa
Christopher Summerfield
Rafal Bogacz

Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model well describing this accumulation for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given the two options. Several models have been proposed for how neural circuits can learn these log-likelihood ratios from experience, but all of these models introduced novel and specially dedicated synaptic plasticity rules. Here we show that for a certain wide class of tasks, the log-likelihood ratios are approximately proportional to the expected rewards for selecting actions. Therefore, a simple model based on standard reinforcement learning rules is able to estimate the log-likelihood ratios from experience and, on each trial, accumulate the log-likelihood ratios associated with presented stimuli while selecting an action. Simulations of the model replicate experimental data on both behavior and neural activity in tasks requiring accumulation of probabilistic cues. Our results suggest that there is no need for the brain to support dedicated plasticity rules, as the standard mechanisms proposed to describe reinforcement learning can enable neural circuits to perform efficient probabilistic inference.
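The core claim — that standard delta-rule value learning recovers weights roughly proportional to the log-likelihood ratios — can be checked in a toy probabilistic-cue task. The task structure, cue probabilities, learning rate, and trial counts below are illustrative assumptions, not the authors' simulation.

```python
import numpy as np

# Hypothetical weather-prediction-style task: 4 cues, two hidden states A/B.
p_cue_A = np.array([0.4, 0.3, 0.2, 0.1])   # P(cue | state A)
p_cue_B = p_cue_A[::-1]                     # P(cue | state B)
llr = np.log(p_cue_A / p_cue_B)             # per-cue log-likelihood ratios

rng = np.random.default_rng(1)
Q = np.zeros((4, 2))                        # Q[cue, action]; action 0 = "choose A"
lr = 0.02
for _ in range(20000):
    state = rng.integers(2)                              # 0 = A, 1 = B
    p = p_cue_A if state == 0 else p_cue_B
    cue = rng.choice(4, p=p)
    action = rng.integers(2)                             # explore uniformly
    reward = 1.0 if action == state else 0.0
    Q[cue, action] += lr * (reward - Q[cue, action])     # standard delta rule

w = Q[:, 0] - Q[:, 1]   # learned evidence weights for "choose A"
print(np.round(w, 2), np.round(llr, 2))
```

Summing w over the cues shown on a trial then approximates summing their LLRs, so an ordinary reinforcement learner can perform the accumulation the inference model requires, without any dedicated plasticity rule.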

