Neural Computation
Recently Published Documents


TOTAL DOCUMENTS: 479 (five years: 68)
H-INDEX: 42 (five years: 5)

2021
Author(s): Sergei Gepshtein, Ambarish Pawar, Sunwoo Kwon, Sergey Savelev, Thomas D Albright

The traditional view of neural computation in the cerebral cortex holds that sensory neurons are specialized, i.e., selective for certain dimensions of sensory stimuli. This view was challenged by evidence of contextual interactions between stimulus dimensions in which a neuron's response to one dimension strongly depends on other dimensions. Here we use methods of mathematical modeling, psychophysics, and electrophysiology to address shortcomings of the traditional view. Using a model of a generic cortical circuit, we begin with the simple demonstration that cortical responses are always distributed among neurons, forming characteristic waveforms, which we call neural waves. When stimulated by patterned stimuli, circuit responses arise by interference of neural waves. Resulting patterns of interference depend on interaction between stimulus dimensions. Comparison of these modeled responses with responses of biological vision makes it clear that the framework of neural wave interference provides a useful alternative to the standard concept of neural computation.
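The superposition idea at the heart of this abstract can be sketched in a few lines. The snippet below is a generic illustration only: the damped-oscillation waveform, its parameters, and the two stimulation sites are assumptions for demonstration, not the authors' cortical-circuit model.

```python
import numpy as np

def neural_wave(x, x0, k=2.0, decay=0.5):
    """Damped spatial oscillation spreading from a stimulated site x0.

    A stand-in for a distributed "neural wave" response; the functional
    form and parameter values are illustrative assumptions.
    """
    d = np.abs(x - x0)
    return np.exp(-decay * d) * np.cos(k * d)

# Neuron positions along a 1D patch of cortex (arbitrary units).
x = np.linspace(0, 10, 201)

# A patterned stimulus with two components evokes two waves; the
# circuit response is their superposition, so the interference
# pattern depends on the separation between the two sites.
wave_a = neural_wave(x, x0=4.0)
wave_b = neural_wave(x, x0=6.0)
response = wave_a + wave_b

# Whether interference at the midpoint is constructive or destructive
# is set by the phase difference the separation induces.
midpoint = response[np.argmin(np.abs(x - 5.0))]
```

Varying the separation between the two sites sweeps the midpoint response between constructive and destructive regimes, which is the sense in which responses to combined stimulus dimensions are not simply the sum of tuned, independent channels.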


NeuroImage, 2021, pp. 118827
Author(s): Anne Saulin, Ulrike Horn, Martin Lotze, Jochen Kaiser, Grit Hein

2021
Author(s): Vasilis Thanasoulis, Bernhard Vogginger, Johannes Partzsch, Christian Mayr

Author(s): Hao Zhang, Hui Xiao, Haipeng Qu, Seok-Bum Ko

2021
Author(s): Erik J Peterson

I demonstrate theoretically that calcium waves in astrocytes can compute anything neurons can. A foundational result in neural computation was proving that the firing-rate model of neurons defines a universal function approximator. In this work I show that a similar proof extends to a model of calcium waves in astrocytes, which I confirm in a series of computer simulations. I argue that the major limit on astrocyte computation is not astrocytes' ability to find approximate solutions, but their computational complexity. I suggest some initial experiments that might be used to confirm these predictions.
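The universal-approximation result the abstract builds on can be illustrated with a standard firing-rate layer. This is a generic random-features sketch, not the paper's astrocyte calcium-wave model: the unit count, weight scales, and target function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(u):
    # Sigmoidal firing-rate nonlinearity, the usual choice in
    # rate-model universal-approximation results.
    return 1.0 / (1.0 + np.exp(-u))

# Target function to approximate on [0, 1].
x = np.linspace(0, 1, 200)[:, None]
target = np.sin(2 * np.pi * x[:, 0])

# One layer of N rate units with random input weights; only the
# linear readout is fit, by least squares. With enough units this
# family can approximate any continuous function on a compact set,
# which is the universal-approximation property the abstract cites.
N = 50
w = rng.normal(0, 10, size=(1, N))
b = rng.normal(0, 10, size=N)
hidden = rate(x @ w + b)                      # (200, N) population rates
readout, *_ = np.linalg.lstsq(hidden, target, rcond=None)
approx = hidden @ readout

rel_err = np.linalg.norm(approx - target) / np.linalg.norm(target)
```

The paper's claim, on this reading, is that replacing the sigmoidal rate units with a model of astrocyte calcium waves preserves the same approximation property, with the interesting questions shifted to computational complexity rather than expressivity.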


2021
Author(s): Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko

2021
Author(s): Corey J. Maley

Representation is typically taken to be importantly separate from its physical implementation. This is exemplified in Marr's three-level framework, widely cited and often adopted in neuroscience. However, the separation between representation and physical implementation is not a necessary feature of information-processing systems. In particular, when it comes to analog computational systems, Marr's representational/algorithmic level and implementational level collapse into a single level. Insofar as analog computation is a better way of understanding neural computation than other notions, Marr's three-level framework must then be amended into a two-level framework. However, far from being a problem or limitation, this sheds light on how to understand physical media as being representational, but without a separate, medium-independent representational level.


2021, Vol 11 (1)
Author(s): Alberto Mazzoni, Calogero M. Oddo, Giacomo Valle, Domenico Camboni, Ivo Strauss, ...

eLife, 2021, Vol 10
Author(s): Sean R Bittner, Agostina Palmigiano, Alex T Piet, Chunyu A Duan, Carlos D Brody, ...

A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon -- whether behavioral or a pattern of neural activity -- and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivational example of parameter inference in the stomatogastric ganglion. EPI is then shown to allow precise control over the behavior of inferred parameters and to scale in parameter dimension better than alternative techniques. In the remainder of this work, we present novel theoretical findings in models of primary visual cortex and superior colliculus, which were gained through the examination of complex parametric structure captured by EPI. Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
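The inverse problem EPI addresses can be shown at toy scale. The sketch below is not EPI: it substitutes plain rejection sampling for EPI's deep generative networks, and the one-unit rate model, its parameter ranges, and the target "emergent property" are all hypothetical choices made purely to illustrate what "parameters consistent with an observed phenomenon" means.

```python
import numpy as np

rng = np.random.default_rng(1)

def steady_rate(w, I, steps=200, dt=0.1):
    """Steady-state rate of a single self-coupled rate unit.

    Integrates dr/dt = -r + tanh(w*r + I) from r = 0. A toy circuit
    standing in for the mechanistic models discussed above; w (self-
    coupling) and I (input drive) are the parameters to infer.
    """
    r = 0.0
    for _ in range(steps):
        r += dt * (-r + np.tanh(w * r + I))
    return r

# "Emergent property": steady-state rate inside a target band.
# EPI learns a distribution over all parameter sets producing such a
# property; here, rejection sampling over a uniform prior solves the
# same inverse problem, feasibly only because the model is tiny.
lo, hi = 0.4, 0.6
accepted = []
for _ in range(2000):
    w, I = rng.uniform(-1, 1), rng.uniform(-1, 1)
    if lo <= steady_rate(w, I) <= hi:
        accepted.append((w, I))

accepted = np.array(accepted)   # each row is a consistent (w, I) pair
```

The accepted set is typically a curved region rather than a point, which is the "complex parametric structure" the abstract refers to; EPI's contribution is making such structure tractable to capture when the parameter space is far too large for rejection sampling.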

