theoretical neuroscience
Recently Published Documents


TOTAL DOCUMENTS

35
(FIVE YEARS 10)

H-INDEX

7
(FIVE YEARS 2)

2021 ◽  
pp. 1-44
Author(s):  
Joseph Marino

Abstract We present a review of predictive coding, from theoretical neuroscience, and variational autoencoders, from machine learning, identifying the common origin and mathematical framework underlying both areas. As each area is prominent within its respective field, more firmly connecting these areas could prove useful in the dialogue between neuroscience and machine learning. After reviewing each area, we discuss two possible correspondences implied by this perspective: cortical pyramidal dendrites as analogous to (nonlinear) deep networks and lateral inhibition as analogous to normalizing flows. These connections may provide new directions for further investigations in each field.
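The correspondence the abstract describes can be made concrete in its simplest linear case. The sketch below is our illustration, not the paper's code: iterative predictive-coding inference in a one-layer linear generative model x ≈ Wz, where the latent estimate descends the squared prediction error, the simplest instance of the variational objective shared by predictive coding and variational autoencoders. All names and sizes here are ours.

```python
import numpy as np

# Hypothetical minimal sketch: linear predictive coding as iterative
# error-driven inference.  Observation x is generated as W @ z_true;
# we recover z by gradient descent on the prediction error.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))           # generative weights (fixed here)
z_true = rng.standard_normal(4)
x = W @ z_true                             # noiseless observation

z = np.zeros(4)                            # inferred latent state
lr = 1.0 / np.linalg.norm(W, ord=2) ** 2   # safe step size (1 / largest eigenvalue of W.T @ W)
for _ in range(2000):
    eps = x - W @ z                        # prediction error signal
    z += lr * (W.T @ eps)                  # error-driven latent update

print(np.allclose(z, z_true, atol=1e-4))   # inference recovers the latent
```

In a VAE the same objective is minimized, but the iterative inference loop is amortized into a trained encoder network.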


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Sean R Bittner ◽  
Agostina Palmigiano ◽  
Alex T Piet ◽  
Chunyu A Duan ◽  
Carlos D Brody ◽  
...  

A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon -- whether behavioral or a pattern of neural activity -- and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivational example of parameter inference in the stomatogastric ganglion. EPI is then shown to allow precise control over the behavior of inferred parameters and to scale in parameter dimension better than alternative techniques. In the remainder of this work, we present novel theoretical findings in models of primary visual cortex and superior colliculus, which were gained through the examination of complex parametric structure captured by EPI. Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
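The inverse problem the abstract poses can be illustrated with a deliberately crude stand-in. The sketch below is plain rejection sampling over a toy one-neuron rate model, not EPI itself: it searches for parameters whose steady-state activity matches a target "emergent property", which is the brute-force search that EPI replaces with a deep normalizing flow over parameter distributions. The model and all names are our invention.

```python
import numpy as np

# Toy inverse problem (not EPI): which self-coupling weights w of a
# one-unit rate model r = tanh(w * r + 1) yield a steady-state rate
# near a target value?  Rejection sampling keeps consistent parameters.
rng = np.random.default_rng(1)

def steady_rate(w):
    """Approximate fixed point of r = tanh(w * r + 1) by iteration."""
    r = 0.0
    for _ in range(200):
        r = np.tanh(w * r + 1.0)
    return r

target, tol = 0.5, 0.05
samples = rng.uniform(-2.0, 2.0, size=5000)    # prior over the parameter w
accepted = [w for w in samples if abs(steady_rate(w) - target) < tol]
print(len(accepted) > 0)                       # some parameters produce the property
```

Rejection sampling scales hopelessly with parameter dimension, which is exactly the regime where the paper's flow-based approach is claimed to help.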


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1330
Author(s):  
Rodrigo Cofré ◽  
Cesar Maldonado ◽  
Bruno Cessac

The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.
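The variational principle in its simplest form, the Maximum Entropy principle, can be shown in a minimal sketch (ours, not the article's): constraining only each neuron's firing rate yields an independent Gibbs measure p(σ) ∝ exp(Σᵢ hᵢσᵢ), whose Lagrange multipliers hᵢ = log(rᵢ / (1 − rᵢ)) reproduce the observed rates exactly.

```python
import numpy as np

# Simplest Gibbs / Maximum Entropy model of binary spike data:
# match only first-order statistics (firing rates).  The resulting
# independent model has closed-form parameters h_i = logit(r_i).
rng = np.random.default_rng(2)
spikes = (rng.random((10000, 3)) < np.array([0.1, 0.3, 0.5])).astype(int)

rates = spikes.mean(axis=0)                # empirical constraints
h = np.log(rates / (1 - rates))            # MaxEnt Lagrange multipliers
model_rates = 1 / (1 + np.exp(-h))         # rates implied by the Gibbs measure

print(np.allclose(model_rates, rates))     # constraints matched exactly
```

Adding pairwise constraints turns this into the Ising-type models widely fitted to neural data, for which the multipliers no longer have a closed form.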


2020 ◽  
Author(s):  
Kion Fallah ◽  
Adam A. Willats ◽  
Ninghao Liu ◽  
Christopher J. Rozell

Abstract Sparse coding is an important method for unsupervised learning of task-independent features in theoretical neuroscience models of neural coding. While a number of algorithms exist to learn these representations from the statistics of a dataset, they largely ignore the information bottlenecks present in fiber pathways connecting cortical areas. For example, the visual pathway has many fewer neurons transmitting visual information to cortex than the number of photoreceptors. Both empirical and analytic results have recently shown that sparse representations can be learned effectively after performing dimensionality reduction with randomized linear operators, producing latent coefficients that preserve information. Unfortunately, current proposals for sparse coding in the compressed space require a centralized compression process (i.e., dense random matrix) that is biologically unrealistic due to local wiring constraints observed in neural circuits. The main contribution of this paper is to leverage recent results on structured random matrices to propose a theoretical neuroscience model of randomized projections for communication between cortical areas that is consistent with the local wiring constraints observed in neuroanatomy. We show analytically and empirically that unsupervised learning of sparse representations can be performed in the compressed space despite significant local wiring constraints in compression matrices of varying forms (corresponding to different local wiring patterns). Our analysis verifies that even with significant local wiring constraints, the learned representations remain qualitatively similar, have similar quantitative performance in both training and generalization error, and are consistent across many measures with measured macaque V1 receptive fields.
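The contrast between centralized and locally wired compression can be sketched numerically. The example below is our illustration under assumed dimensions, not the paper's construction: it compares a dense Gaussian random projection with a block-diagonal one in which each output unit pools only a local group of inputs, and checks that both roughly preserve signal geometry, the property that makes sparse coding in the compressed space feasible.

```python
import numpy as np

# Dense vs. block-diagonal ("local wiring") random projections.
# Both are scaled so that the expected squared norm of P @ x equals
# the squared norm of x; single draws fluctuate around that value.
rng = np.random.default_rng(3)
n, m, blocks = 64, 16, 4                  # input dim, output dim, local groups

dense = rng.standard_normal((m, n)) / np.sqrt(m)

local = np.zeros((m, n))                  # block-diagonal local wiring
bn, bm = n // blocks, m // blocks
for b in range(blocks):
    local[b*bm:(b+1)*bm, b*bn:(b+1)*bn] = (
        rng.standard_normal((bm, bn)) / np.sqrt(bm))

x = rng.standard_normal(n)
ratios = {}
for name, P in (("dense", dense), ("local", local)):
    ratios[name] = np.linalg.norm(P @ x) / np.linalg.norm(x)
    print(name, round(ratios[name], 2))   # both fluctuate around 1
```

The block-diagonal matrix is only one of the local wiring patterns the paper considers; the point of the sketch is that locality alone does not destroy the norm-preservation that compressed sparse coding relies on.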


Author(s):  
Rodrigo Cofré ◽  
Cesar Maldonado ◽  
Bruno Cessac

The Thermodynamic Formalism provides a rigorous mathematical framework to study quantitative and qualitative aspects of dynamical systems. At its core there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle, used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science; in particular, it has been fruitful and influential in the neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.


2019 ◽  
Vol 29 (5) ◽  
pp. 579-600 ◽  
Author(s):  
Eric Hochstein

The study of psychological mechanisms is an interdisciplinary endeavour, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this article, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means adopting deliberate metaphysical positions when studying mechanisms that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, determine which variables to manipulate under experimental conditions, and which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since they can easily be led astray if invoked carelessly.


2019 ◽  
Vol 3 (4) ◽  
pp. 902-904
Author(s):  
Alexander Peyser ◽  
Sandra Diaz Pier ◽  
Wouter Klijn ◽  
Abigail Morrison ◽  
Jochen Triesch

Large-scale in silico experimentation depends on the generation of connectomes beyond available anatomical structure. We suggest that linking research across the fields of experimental connectomics, theoretical neuroscience, and high-performance computing can enable a new generation of models bridging the gap between biophysical detail and global function. This Focus Feature on "Linking Experimental and Computational Connectomics" aims to bring together some examples from these domains as a step toward the development of more comprehensive generative models of multiscale connectomes.


Author(s):  
Marcelo Victor Pires de Sousa ◽  
Marucia Chacur ◽  
Daniel Oliveira Martins ◽  
Carlo Rondinoni
