Single Cortical Neurons as Deep Artificial Neural Networks

2019
Author(s):
David Beniaguev
Idan Segev
Michael London

Abstract We introduce a novel approach to study neurons as sophisticated I/O information processing units by utilizing recent advances in the field of machine learning. We trained deep neural networks (DNNs) to mimic the I/O behavior of a detailed nonlinear model of a layer 5 cortical pyramidal cell, receiving rich spatio-temporal patterns of input synapse activations. A Temporally Convolutional DNN (TCN) with seven layers was required to accurately, and very efficiently, capture the I/O of this neuron at the millisecond resolution. This complexity primarily arises from local NMDA-based nonlinear dendritic conductances. The weight matrices of the DNN provide new insights into the I/O function of cortical pyramidal neurons, and the approach presented can provide a systematic characterization of the functional complexity of different neuron types. Our results demonstrate that cortical neurons can be conceptualized as multi-layered “deep” processing units, implying that the cortical networks they form have a non-classical architecture and are potentially more computationally powerful than previously assumed.
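The setup described above can be sketched in a few lines of numpy. This is a toy causal (temporally convolutional) forward pass only, our illustration rather than the authors' trained model: the seven-layer depth matches the abstract, but the synapse count, channel widths, kernel size, and random weights are placeholders, and no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_conv(x, w, b):
    """Causal 1D convolution: output at time t depends only on inputs at times <= t.
    x: (channels_in, T), w: (channels_out, channels_in, k), b: (channels_out,)."""
    c_out, c_in, k = w.shape
    T = x.shape[1]
    xp = np.concatenate([np.zeros((c_in, k - 1)), x], axis=1)  # left-pad in time
    out = np.zeros((c_out, T))
    for t in range(T):
        out[:, t] = np.tensordot(w, xp[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return out

def tcn_forward(x, layers):
    """Stack of causal conv layers with ReLU; sigmoid on the last layer gives a
    per-millisecond spike probability for the mimicked neuron."""
    h = x
    for i, (w, b) in enumerate(layers):
        h = causal_conv(h, w, b)
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)
    return 1.0 / (1.0 + np.exp(-h))

# Toy dimensions (assumptions): 128 input synapses, 200 ms of input,
# seven layers as in the abstract, 16 channels per hidden layer.
n_syn, T, width, k = 128, 200, 16, 5
sizes = [n_syn] + [width] * 6 + [1]
layers = [(0.1 * rng.standard_normal((sizes[i + 1], sizes[i], k)),
           np.zeros(sizes[i + 1])) for i in range(7)]
spikes_in = (rng.random((n_syn, T)) < 0.01).astype(float)  # sparse synaptic spikes
p_spike = tcn_forward(spikes_in, layers)  # shape (1, T)
```

Because every layer is causal, perturbing the input at time t can only change the predicted spike probability at times t and later, which is what makes the architecture a fair stand-in for a neuron's millisecond-resolution I/O mapping.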

2017
Author(s):
Emily K. Stephens
Arielle L. Baker
Allan T. Gulledge

Abstract Serotonin (5-HT) selectively excites subpopulations of pyramidal neurons in the neocortex via activation of 5-HT2A (2A) receptors coupled to Gq subtype G-protein alpha subunits. Gq-mediated excitatory responses have been attributed primarily to suppression of potassium conductances, including those mediated by KV7 potassium channels (i.e., the M-current), or activation of nonspecific cation conductances that underlie calcium-dependent afterdepolarizations (ADPs). However, 2A-dependent excitation of cortical neurons has not been extensively studied, and no consensus exists regarding the underlying ionic effector(s) involved. We tested potential mechanisms of serotonergic excitation in commissural/callosal projection neurons (COM neurons) in layer 5 of the mouse medial prefrontal cortex, a subpopulation of cortical pyramidal neurons that exhibit 2A-dependent excitation in response to 5-HT. In baseline conditions, 5-HT enhanced the rate of action potential generation in COM neurons experiencing suprathreshold somatic current injection. This serotonergic excitation was occluded by activation of muscarinic acetylcholine (ACh) receptors, confirming that 5-HT acts via the same Gq-signaling cascades engaged by ACh. Like ACh, 5-HT promoted the generation of calcium-dependent ADPs following spike trains. However, calcium was not necessary for serotonergic excitation, as responses to 5-HT were enhanced (by >100%), rather than reduced, by chelation of intracellular calcium with 10 mM BAPTA. This suggests intracellular calcium negatively regulates additional ionic conductances contributing to 2A excitation. Removal of extracellular calcium had no effect when intracellular calcium signaling was intact, but suppressed 5-HT response amplitudes by about 50% (i.e., back to normal baseline values) when BAPTA was included in patch pipettes.
This suggests that 2A excitation involves activation of a nonspecific cation conductance that is both calcium-sensitive and calcium-permeable. M-current suppression was found to be a third ionic effector, as blockade of KV7 channels with XE991 (10 μM) reduced serotonergic excitation by ∼50% in control conditions, and by ∼30% with intracellular BAPTA present. These findings demonstrate a role for at least three distinct ionic effectors, including KV7 channels, a calcium-sensitive and calcium-permeable nonspecific cation conductance, and the calcium-dependent ADP conductance, in mediating serotonergic excitation of COM neurons.


2021
pp. 1-60
Author(s):
Khashayar Filom
Roozbeh Farhoodi
Konrad Paul Kording

Abstract Neural networks are versatile tools for computation, having the ability to approximate a broad range of functions. An important problem in the theory of deep neural networks is expressivity; that is, we want to understand the functions that are computable by a given network. We study real, infinitely differentiable (smooth) hierarchical functions implemented by feedforward neural networks via composing simpler functions in two cases: (1) each constituent function of the composition has fewer inputs than the resulting function and (2) constituent functions are in the more specific yet prevalent form of a nonlinear univariate function (e.g., tanh) applied to a linear multivariate function. We establish that in each of these regimes, there exist nontrivial algebraic partial differential equations (PDEs) that are satisfied by the computed functions. These PDEs are purely in terms of the partial derivatives and are dependent only on the topology of the network. Conversely, we conjecture that such PDE constraints, once accompanied by appropriate nonsingularity conditions and perhaps certain inequalities involving partial derivatives, guarantee that the smooth function under consideration can be represented by the network. The conjecture is verified in numerous examples, including the case of tree architectures, which are of neuroscientific interest. Our approach is a step toward formulating an algebraic description of functional spaces associated with specific neural networks, and may provide useful new tools for constructing neural networks.
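A concrete instance of such a PDE constraint (our worked example, not taken from the paper): any function computed by a single unit, F(x, y) = g(a*x + b*y), has F_x / F_y = a/b constant, and eliminating the weights yields the algebraic PDE F_y * F_xx - F_x * F_xy = 0, which holds for every smooth g and every choice of a, b. A quick numerical check with finite differences:

```python
import numpy as np

def pde_residual(F, x, y, h=1e-3):
    """Residual of the algebraic PDE  F_y * F_xx - F_x * F_xy = 0,
    satisfied by every smooth function of the form F(x, y) = g(a*x + b*y).
    Derivatives are estimated with central finite differences."""
    Fx  = (F(x + h, y) - F(x - h, y)) / (2 * h)
    Fy  = (F(x, y + h) - F(x, y - h)) / (2 * h)
    Fxx = (F(x + h, y) - 2 * F(x, y) + F(x - h, y)) / h**2
    Fxy = (F(x + h, y + h) - F(x + h, y - h)
           - F(x - h, y + h) + F(x - h, y - h)) / (4 * h**2)
    return Fy * Fxx - Fx * Fxy

# A single-unit function g(w . x): satisfies the PDE for any smooth g.
neuron = lambda x, y: np.tanh(2.0 * x + 3.0 * y)
# A function NOT of that form: generically violates the PDE.
product = lambda x, y: x * y

r_neuron = pde_residual(neuron, 0.3, 0.1)    # ~0 up to discretization error
r_product = pde_residual(product, 0.3, 0.1)  # equals -y = -0.1, clearly nonzero
```

The residual thus acts as a purely differential test, independent of the weights and of the nonlinearity, for whether a smooth function could have been computed by this one-unit architecture, which is the flavor of constraint the abstract describes for general network topologies.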


2018
Author(s):
Toviah Moldwin
Idan Segev

Abstract The perceptron learning algorithm and its multiple-layer extension, the backpropagation algorithm, are the foundations of the present-day machine learning revolution. However, these algorithms utilize a highly simplified mathematical abstraction of a neuron; it is not clear to what extent real biophysical neurons with morphologically-extended nonlinear dendritic trees and conductance-based synapses could realize perceptron-like learning. Here we implemented the perceptron learning algorithm in a realistic biophysical model of a layer 5 cortical pyramidal cell. We tested this biophysical perceptron (BP) on a memorization task, where it needs to correctly perform binary classification of 100, 1000, or 2000 patterns, and a generalization task, where it should discriminate between two “noisy” patterns. We show that the BP performs these tasks with an accuracy comparable to that of the original perceptron, though the memorization capacity of the apical tuft is somewhat limited. We conclude that cortical pyramidal neurons can act as powerful classification devices.
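For reference, the classical perceptron baseline that the biophysical perceptron is compared against can be sketched in numpy. This is our minimal sketch of the textbook algorithm on the 100-pattern memorization task; the 1000-dimensional input and Gaussian patterns are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_perceptron(X, y, epochs=200, lr=1.0):
    """Classic perceptron rule: on each misclassified pattern, nudge the
    weights toward (label +1) or away from (label -1) that pattern."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else -1
            if pred != yi:
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:  # all patterns memorized
            break
    return w, b

# Memorization task in the spirit of the abstract: random patterns with
# random +/-1 labels. A perceptron's capacity is ~2 patterns per input
# dimension, so 100 patterns in 1000 dimensions are comfortably memorizable.
n_patterns, n_inputs = 100, 1000
X = rng.standard_normal((n_patterns, n_inputs))
y = rng.choice([-1, 1], size=n_patterns)
w, b = train_perceptron(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

Since the random patterns are linearly separable with high probability at this load, the perceptron convergence theorem guarantees the rule reaches perfect memorization accuracy; the paper's question is how closely a conductance-based pyramidal cell model can match this behavior.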


Author(s):  
Chi Qiao
Andrew T. Myers

Abstract Surrogate modeling of the variability of metocean conditions in space and in time during hurricanes is a crucial task for risk analysis on offshore structures such as offshore wind turbines, which are deployed over a large area. This task is challenging because of the complex nature of the meteorology-metocean interaction in addition to the time-dependence and high-dimensionality of the output. In this paper, spatio-temporal characteristics of surrogate models, such as Deep Neural Networks, are analyzed based on an offshore multi-hazard database created by the authors. The focus of this paper is two-fold: first, the effectiveness of dimension reduction techniques for representing high-dimensional output distributed in space is investigated and, second, an overall approach to estimate spatio-temporal characteristics of hurricane hazards using Deep Neural Networks is presented. The popular dimension reduction technique, Principal Component Analysis, is shown to perform similarly to a simpler dimension reduction approach, and not as well as a surrogate model implemented without dimension reduction. Discussions are provided to explain why the performance of Principal Component Analysis is only mediocre in this implementation and why dimension reduction might not be necessary.
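The PCA step the paper evaluates can be sketched on synthetic data (our toy stand-in for the metocean database; the storm count, site count, and rank-3 latent structure are assumptions). The idea is to compress each high-dimensional spatial field into a few principal-component scores, which a surrogate such as a DNN would then predict in place of the full field.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a metocean database: each row is one "storm
# snapshot" of a field (e.g., significant wave height) at n_sites locations,
# driven by a small number of latent storm parameters.
n_storms, n_sites, n_latent = 500, 200, 3
params = rng.standard_normal((n_storms, n_latent))   # latent storm parameters
modes = rng.standard_normal((n_latent, n_sites))     # spatial patterns
fields = params @ modes + 0.01 * rng.standard_normal((n_storms, n_sites))

# PCA via SVD of the centered data matrix.
mean = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)

k = 3  # retained principal components
scores = (fields - mean) @ Vt[:k].T   # low-dimensional representation per storm
recon = scores @ Vt[:k] + mean        # map scores back to the full spatial field

rel_err = np.linalg.norm(recon - fields) / np.linalg.norm(fields)
```

When the data really are low-rank, as in this toy, a few components reconstruct the field almost perfectly; the paper's finding is that for the real database the compression step buys little over simpler reductions or over predicting the full output directly.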

