Nonlinear convergence boosts information coding in circuits with parallel outputs

2019 ◽  
Author(s):  
Gabrielle J. Gutierrez ◽  
Fred Rieke ◽  
Eric T. Shea-Brown

Neural circuits are structured with layers of converging and diverging connectivity, and selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions for efficient coding. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions between these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered individually. These results show that the combination of selective nonlinearities and a convergent architecture - both elements that reduce information when acting separately - can promote efficient coding.

Significance Statement: Computation in neural circuits relies on a common set of motifs, including divergence of common inputs to parallel pathways, convergence of multiple inputs to a single neuron, and nonlinearities that select some signals over others. Convergence and circuit nonlinearities, considered individually, can lead to a loss of information about inputs. Past work has detailed how optimized nonlinearities and circuit weights can maximize information, but here we show that incorporating non-invertible nonlinearities into a circuit with divergence and convergence can enhance encoded information despite the suboptimality of these components individually. This study extends a broad literature on efficient coding to convergent circuits. Our results suggest that neural circuits may preserve more information using suboptimal components than one might expect.
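The comparison the abstract describes can be illustrated with a minimal toy sketch (not the paper's actual model): two inputs diverge onto parallel ON and OFF pathways of rectified subunits, each pathway converging onto one output neuron. The ternary input alphabet, ReLU subunits, and unit weights are illustrative assumptions; for a deterministic, noiseless circuit, the entropy of the output distribution equals the mutual information between inputs and outputs.

```python
from collections import Counter
from itertools import product
import math

def relu(x):
    return max(x, 0.0)

def response_entropy(circuit, inputs):
    """Entropy (bits) of the output distribution over equiprobable inputs.
    For a deterministic, noiseless circuit this equals the mutual
    information between inputs and outputs."""
    counts = Counter(circuit(x1, x2) for x1, x2 in inputs)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two inputs diverge onto parallel ON and OFF pathways, each of which
# converges onto a single output neuron.
def nonlinear_circuit(x1, x2):
    on = relu(x1) + relu(x2)     # rectified subunits, ON pathway
    off = relu(-x1) + relu(-x2)  # rectified subunits, OFF pathway
    return (on, off)

def linear_circuit(x1, x2):
    # Linear subunits: both parallel outputs carry (redundant copies of) the sum.
    return (x1 + x2, -x1 - x2)

inputs = list(product([-1, 0, 1], repeat=2))  # 9 equiprobable input patterns
h_nl = response_entropy(nonlinear_circuit, inputs)
h_lin = response_entropy(linear_circuit, inputs)
print(f"nonlinear subunits: {h_nl:.2f} bits, linear subunits: {h_lin:.2f} bits")
```

Even in this toy setting, the rectified parallel pathways keep the two outputs from being redundant, so the nonlinear circuit transmits more bits than the linear one, in line with the abstract's main claim.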


2021 ◽  
Vol 118 (8) ◽  
pp. e1921882118
Author(s):  
Gabrielle J. Gutierrez ◽  
Fred Rieke ◽  
Eric T. Shea-Brown

Neural circuits are structured with layers of converging and diverging connectivity and selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions for efficient coding. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions between these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered separately.





1985 ◽  
Vol 82 (7) ◽  
pp. 3235-3264 ◽  
Author(s):  
Jeppe Olsen ◽  
Poul Jørgensen


2008 ◽  
Vol 44 (4) ◽  
pp. 410-424
Author(s):  
A. A. Zenin ◽  
S. V. Finjakov


2011 ◽  
Vol 50 (16) ◽  
pp. 2401 ◽  
Author(s):  
Yoichiro Hanaoka ◽  
Isao Suzuki ◽  
Takashi Sakurai


2019 ◽  
Author(s):  
Joseph Heng ◽  
Michael Woodford ◽  
Rafael Polania

The precision of human decisions is limited by both processing noise and basing decisions on finite information. But what determines the degree of such imprecision? Here we develop an efficient coding framework for higher-level cognitive processes, in which information is represented by a finite number of discrete samples. We characterize the sampling process that maximizes perceptual accuracy or fitness under the often-adopted assumption that full adaptation to an environmental distribution is possible, and show how the optimal process differs when detailed information about the current contextual distribution is costly. We tested this theory on a numerosity discrimination task, and found that humans efficiently adapt to contextual distributions, but in the way predicted by the model in which people must economize on environmental information. Thus, understanding decision behavior requires that we account for biological restrictions on information coding, challenging the often-adopted assumption of precise prior knowledge in higher-level decision systems.



2020 ◽  
Author(s):  
Arish Alreja ◽  
Ilya Nemenman ◽  
Christopher Rozell

The number of neurons in mammalian cortex varies by multiple orders of magnitude across different species. In contrast, the ratio of excitatory to inhibitory neurons (E:I ratio) varies in a much smaller range, from 3:1 to 9:1, and remains roughly constant for different sensory areas within a species. Despite this structure being important for understanding the function of neural circuits, the reason for this consistency is not yet understood. While recent models of vision based on the efficient coding hypothesis show that increasing the number of both excitatory and inhibitory cells improves stimulus representation, the two cannot increase simultaneously due to constraints on brain volume. In this work, we implement an efficient coding model of vision under a volume (i.e., total number of neurons) constraint while varying the E:I ratio. We show that the performance of the model is optimal at biologically observed E:I ratios under several metrics. We argue that this happens due to trade-offs between the computational accuracy and the representation capacity for natural stimuli. Further, we make experimentally testable predictions that 1) the optimal E:I ratio should be higher for species with a higher sparsity in the neural activity and 2) the character of inhibitory synaptic distributions and firing rates should change depending on E:I ratio. Our findings, which are supported by our new preliminary analyses of publicly available data, provide the first quantitative and testable hypothesis based on optimal coding models for the distribution of neural types in the mammalian sensory cortices.



eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Joseph A Heng ◽  
Michael Woodford ◽  
Rafael Polania

Human decisions are based on finite information, which makes them inherently imprecise. But what determines the degree of such imprecision? Here, we develop an efficient coding framework for higher-level cognitive processes in which information is represented by a finite number of discrete samples. We characterize the sampling process that maximizes perceptual accuracy or fitness under the often-adopted assumption that full adaptation to an environmental distribution is possible, and show how the optimal process differs when detailed information about the current contextual distribution is costly. We tested this theory on a numerosity discrimination task, and found that humans efficiently adapt to contextual distributions, but in the way predicted by the model in which people must economize on environmental information. Thus, understanding decision behavior requires that we account for biological restrictions on information coding, challenging the often-adopted assumption of precise prior knowledge in higher-level decision systems.
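The sample-based framing above can be illustrated with a crude simulation (not the authors' model): when a decision between two numerosities is based on a finite number of noisy discrete samples, discrimination accuracy grows with the sample budget. The Gaussian noise level, the sample-averaging decision rule, and the specific numerosities are arbitrary assumptions for illustration.

```python
import random
import statistics

def discriminate(n1, n2, k, noise=5.0, rng=random):
    """Decide which of two numerosities is larger from k noisy samples
    of each (a crude stand-in for a sample-based internal code)."""
    s1 = statistics.mean(rng.gauss(n1, noise) for _ in range(k))
    s2 = statistics.mean(rng.gauss(n2, noise) for _ in range(k))
    return s1 > s2

def accuracy(n1, n2, k, trials=20000, seed=0):
    """Fraction of trials on which the larger numerosity is chosen."""
    rng = random.Random(seed)
    hits = sum(discriminate(n1, n2, k, rng=rng) for _ in range(trials))
    return hits / trials

# Decision precision grows with the number of samples the system can afford.
for k in (1, 4, 16):
    print(f"k={k:2d} samples: accuracy {accuracy(12, 10, k):.2f}")
```

The point of the sketch is only the qualitative trend: imprecision here comes from the finite sample budget, not from any failure of the decision rule itself.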


