Computational Neuroscience
Recently Published Documents

TOTAL DOCUMENTS: 458 (five years: 104)
H-INDEX: 24 (five years: 4)

2022 ◽  
pp. 86-97
Author(s):  
Hitesh Marwaha ◽  
Anurag Sharma ◽  
Vikrant Sharma

Neuroscience is the study of the brain and its influence on behavior and cognitive function. Computational neuroscience is the subfield concerned with how the brain thinks and computes; it analyzes the electrical and chemical signals through which the brain represents and processes information. This chapter focuses in particular on how the brain processes signals to solve problems. The second section of the chapter discusses the role of graph theory in analyzing patterns of neurons. Graph-based analysis reveals meaningful information about the topological architecture of human brain networks; it also uncovers small-world networks, in which most nodes are not neighbors of one another yet can be reached from every other node in a small number of steps. The chapter concludes that graph-theoretic operations such as vertex centrality and betweenness can be computed to identify the dominant neurons for solving different types of computational problems.
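
As a concrete illustration of the graph measures above, the following is a minimal sketch in Python using the networkx library. The toy connectivity graph and node names are hypothetical, not taken from the chapter; in practice the graph would be built from measured neuronal connectivity data.

```python
# Minimal sketch of graph-theoretic analysis of a neuronal network.
# The toy network below is hypothetical; nodes stand for neurons,
# edges for synaptic connections.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("n1", "n2"), ("n1", "n3"), ("n2", "n3"),  # a local cluster
    ("n3", "n4"),                              # a bridging connection
    ("n4", "n5"), ("n4", "n6"), ("n5", "n6"),  # a second cluster
])

# Degree (vertex) centrality: fraction of other nodes each neuron connects to.
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a neuron lies on shortest paths,
# highlighting "dominant" hub neurons that route information between clusters.
betweenness = nx.betweenness_centrality(G)

# A small average shortest path length in a large, sparse network is the
# signature of the small-world property described above.
avg_path = nx.average_shortest_path_length(G)

for node in sorted(G.nodes):
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
print(f"average shortest path length: {avg_path:.2f}")
```

In this toy network, the bridging neurons n3 and n4 score highest on betweenness, which is exactly the kind of dominance the chapter's measures are meant to detect.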


2021 ◽  
Vol 15 ◽  
Author(s):  
Paul Tschirhart ◽  
Ken Segall

Superconducting electronics (SCE) is uniquely suited to implementing neuromorphic systems. As a result, SCE has the potential to enable a new generation of neuromorphic architectures that simultaneously provide scalability, programmability, biological fidelity, on-line learning support, efficiency, and speed. Supporting all of these capabilities at once has thus far proven difficult with existing semiconductor technologies. However, as the fields of computational neuroscience and artificial intelligence (AI) continue to advance, the need for architectures that combine these capabilities will grow. In this paper, we explain how superconducting electronics could address this need by combining analog and digital SCE circuits to build large-scale neuromorphic systems. In particular, we show through detailed analysis that the available SCE technology is suitable for near-term neuromorphic demonstrations. Furthermore, this analysis establishes that neuromorphic architectures built using SCE have the potential to be significantly faster and more efficient than current approaches, while supporting capabilities such as biologically suggestive neuron models and on-line learning. In the future, SCE-based neuromorphic systems could serve as experimental platforms for investigations that are not feasible with current approaches. Ultimately, these systems and the experiments they support would advance neuroscience and enable the development of more sophisticated AI.


2021 ◽  
Author(s):  
Giuseppe de Alteriis ◽  
Enrico Cataldo ◽  
Alberto Mazzoni ◽  
Calogero Maria Oddo

The Izhikevich artificial spiking neuron model is among the most widely used models in neuromorphic engineering and computational neuroscience, owing to its biological plausibility and the affordable computational effort of discretizing it. It has also been adopted for applications with limited computational resources in embedded systems. It is therefore important to strike a compromise between error and computational expense when solving the model's equations numerically. Here we investigate the effects of discretization and identify the solver that realizes the best compromise between accuracy and computational cost for a given budget of floating-point operations per second (FLOPS). We considered three fixed-step Ordinary Differential Equation (ODE) solvers frequently used in computational neuroscience: the Euler method, the Runge-Kutta 2 method, and the Runge-Kutta 4 method. To quantify the error produced by each solver, we used the Victor-Purpura spike train distance from an ideal solution of the ODE. Counterintuitively, we found that simple methods such as Euler and Runge-Kutta 2 can outperform more complex ones (i.e., Runge-Kutta 4) in the numerical solution of the Izhikevich model when the same FLOPS are allocated in the comparison. Moreover, we quantified the neuron rest time (with input under threshold resulting in no output spikes) necessary for the numerical solution to converge to the ideal solution and thereby cancel the error accumulated during the spike train; in this analysis we found that the required rest time is independent of the firing rate and the spike train duration. Our results generalize straightforwardly to other spiking neuron models and provide a systematic analysis of fixed-step neural ODE solvers, targeting a trade-off between discretization accuracy and computational cost.
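
To make the discretization concrete, here is a minimal sketch of the forward-Euler solver for the Izhikevich model in Python. The neuron parameters are the standard regular-spiking constants from Izhikevich (2003); the step size and input current are illustrative assumptions, not the configurations benchmarked in the paper.

```python
# Minimal sketch: forward-Euler integration of the Izhikevich neuron.
# Parameters a, b, c, d are the standard regular-spiking values; dt and I
# are illustrative choices, not the paper's benchmark settings.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
dt = 0.1                             # integration step (ms)
T = 1000.0                           # simulation length (ms)
I = 10.0                             # constant input current

v = -65.0                            # membrane potential (mV)
u = b * v                            # recovery variable
spike_times = []

for step in range(int(T / dt)):
    # Izhikevich dynamics: dv/dt = 0.04 v^2 + 5 v + 140 - u + I,
    #                      du/dt = a (b v - u)
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    v += dt * dv                     # forward-Euler update
    u += dt * du
    if v >= 30.0:                    # spike detected: reset v, bump u
        spike_times.append(step * dt)
        v, u = c, u + d

rate = len(spike_times) / (T / 1000.0)
print(f"{len(spike_times)} spikes; firing rate = {rate:.1f} Hz")
```

Replacing the two Euler updates with Runge-Kutta stages multiplies the per-step cost, which is why, at a fixed FLOPS budget, the paper can compare a fine-stepped Euler solver against a coarser-stepped Runge-Kutta 4 solver.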


2021 ◽  
pp. 217-234
Author(s):  
Dangi Sarishma ◽  
Sumitra Sangwan ◽  
Ravi Tomar ◽  
Rohit Srivastava

2021 ◽  
Vol 3 ◽  
Author(s):  
Yann Beilliard ◽  
Fabien Alibart

Neuromorphic computing based on spiking neural networks has the potential to significantly improve the on-line learning capabilities and energy efficiency of artificial intelligence, especially for edge computing. Recent progress in computational neuroscience has demonstrated the importance of heterosynaptic plasticity for network activity regulation and memorization. Implementing heterosynaptic plasticity in hardware is thus highly desirable, but important materials and engineering challenges remain, calling for breakthroughs in neuromorphic devices. In this mini-review, we provide an overview of the latest advances in multi-terminal memristive devices on silicon with tunable synaptic plasticity, enabling heterosynaptic plasticity in hardware. The scalability of these devices and their compatibility with industrial complementary metal-oxide-semiconductor (CMOS) technologies are discussed.


2021 ◽  
pp. 1-46
Author(s):  
João Angelo Ferres Brogin ◽  
Jean Faber ◽  
Douglas Domingues Bueno

Epilepsy is one of the most common brain disorders worldwide, affecting millions of people every year. Although significant effort has been put into better understanding and mitigating it, conventional treatments are not fully effective. Advances in computational neuroscience, using mathematical dynamic models that represent brain activity at different scales, have made it possible to address epilepsy from a more theoretical standpoint. In particular, the recently proposed Epileptor model stands out because it captures the main features of seizures well, and its simulation results have been consistent with experimental observations. There has also been increasing interest in designing control techniques for the Epileptor that might lead to realistic feedback controllers in the future. However, such approaches rely on knowing all of the model's states, which is not the case in practice. The work explored in this letter develops a state observer to estimate the Epileptor's unmeasurable variables and to reconstruct the respective so-called bursters. Furthermore, an alternative formulation of the model is presented to enhance the observer's convergence speed. The results show that the proposed approach is efficient under two main conditions: when the brain is undergoing a seizure and when a transition from healthy to epileptiform activity occurs.
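
As a rough illustration of the state-observer idea, here is a minimal sketch of a Luenberger observer for a generic linear system in Python. It is not the paper's observer for the (nonlinear) Epileptor; the system matrices, pole locations, and step size below are illustrative assumptions.

```python
# Minimal sketch of a Luenberger state observer for a generic linear system:
# only the first state is measured, yet both states are reconstructed.
import numpy as np
from scipy.signal import place_poles

# Plant: x' = A x, measured output y = C x.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])

# Observer gain L via pole placement (duality with state feedback), so the
# error dynamics (A - L C) converge faster than the plant dynamics.
L = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T

dt, steps = 0.001, 5000
x = np.array([1.0, 0.0])      # true (partly unmeasured) state
x_hat = np.zeros(2)           # observer estimate, deliberately wrong at t = 0

for _ in range(steps):
    y = C @ x                                        # measurement
    x = x + dt * (A @ x)                             # true dynamics (Euler)
    x_hat = x_hat + dt * (A @ x_hat + L @ (y - C @ x_hat))  # observer update

print("final estimation error:", np.linalg.norm(x - x_hat))
```

The same correction-by-output-error structure carries over to nonlinear observers like the one designed for the Epileptor, where the gain must additionally dominate the nonlinear terms.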


2021 ◽  
Author(s):  
Masaru Kondo

We propose a mathematical model for quantifying willpower and an application based on the model. Volitional Motion Theory (VMT) is a mathematical model that draws on classical mechanics, thermodynamics, statistical mechanics, information theory, and philosophy. The resulting quantities are theoretical statistical values deduced from observable data. VMT can be applied to a variety of fields, including behavioral science, behavioral economics, and computational neuroscience; for example, it offers one proposal for answering the question "What are animal spirits in economics?" In addition, a scheduling application has been created to validate VMT and is publicly available for anyone to use.


2021 ◽  
Author(s):  
Nimrod Shaham ◽  
Jay Chandra ◽  
Gabriel Kreiman ◽  
Haim Sompolinsky

Humans have the remarkable ability to continually store new memories while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model of continual learning in a recurrent neural network that combines Hebbian learning, synaptic decay, and a novel memory consolidation mechanism. Memories undergo stochastic rehearsals at rates proportional to each memory's basin of attraction, causing self-amplified consolidation. This gives rise to memory lifetimes that extend far beyond the synaptic decay time, and to a capacity proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally graded retrograde and anterograde deficits, mimicking memory impairments observed after neurological trauma.
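
As a rough sketch of two of the three ingredients named above, the following Python snippet combines Hebbian storage with synaptic decay in a Hopfield-like network. The paper's stochastic-rehearsal consolidation mechanism is not reproduced here, and all sizes and rates are illustrative assumptions.

```python
# Minimal sketch: Hebbian learning plus synaptic decay in a binary
# recurrent (Hopfield-like) network. Without a consolidation mechanism,
# decay makes storage recency-weighted, motivating rehearsal-based schemes.
import numpy as np

rng = np.random.default_rng(0)
N, n_memories = 200, 30
eta, lam = 1.0 / N, 0.05          # Hebbian learning rate, synaptic decay rate

W = np.zeros((N, N))
memories = rng.choice([-1.0, 1.0], size=(n_memories, N))

for xi in memories:               # store memories sequentially
    W = (1.0 - lam) * W + eta * np.outer(xi, xi)   # decay, then Hebbian update
    np.fill_diagonal(W, 0.0)

def recall_overlap(xi, steps=20):
    """Run the recurrent dynamics from a noisy cue; return overlap with xi."""
    s = np.where(rng.random(N) < 0.1, -xi, xi)     # cue with 10% of bits flipped
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)      # sign-threshold update
    return float(s @ xi) / N

# Old memories fade as their weights decay; recent ones recall reliably.
print("oldest memory overlap:", recall_overlap(memories[0]))
print("newest memory overlap:", recall_overlap(memories[-1]))
```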


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Erik D. Fagerholm ◽  
W. M. C. Foulkes ◽  
Karl J. Friston ◽  
Rosalyn J. Moran ◽  
Robert Leech

The principle of stationary action is a cornerstone of modern physics, providing a powerful framework for investigating dynamical systems from classical mechanics through to quantum field theory. However, computational neuroscience, despite its heavy reliance on concepts from physics, is anomalous in this regard: its main equations of motion are not compatible with a Lagrangian formulation and hence with the principle of stationary action. Taking the Dynamic Causal Modelling (DCM) neuronal state equation as an instructive archetype of the first-order linear differential equations commonly found in computational neuroscience, we show that it is possible to modify this equation to render it compatible with the principle of stationary action. Specifically, we show that a Lagrangian formulation of the DCM neuronal state equation is facilitated by a complex dependent variable, an oscillatory solution, and a Hermitian intrinsic connectivity matrix. We first demonstrate proof of principle by using Bayesian model inversion to show that both the original and modified models can be correctly identified via in silico data generated directly from their respective equations of motion. We then motivate adopting the modified models in neuroscience by using three types of publicly available in vivo neuroimaging datasets, together with open-source MATLAB code, to show that the modified (oscillatory) model provides a more parsimonious explanation for some of these empirical time series. It is our hope that this work will, in combination with existing techniques, allow researchers to explore the symmetries and associated conservation laws within neural systems, and to exploit the computational expediency afforded by direct variational techniques.
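
As a rough numerical illustration of why a Hermitian connectivity matrix yields oscillatory dynamics, the following Python sketch contrasts a real first-order linear system with a complex one. The specific form z' = iHz is an assumption made here for illustration, not necessarily the paper's exact formulation.

```python
# Minimal sketch (illustrative assumption): original model x' = A x with a
# real connectivity matrix versus a modified model z' = i H z with Hermitian
# H. Since i H is skew-Hermitian, exp(i H t) is unitary and the state norm is
# exactly conserved, i.e., the solution is purely oscillatory.
import numpy as np
from scipy.linalg import expm

# Real connectivity matrix with stable (decaying) eigenvalues.
A = np.array([[-0.5, 0.3],
              [0.1, -0.4]])
x0 = np.array([1.0, 0.0])

# Hermitian connectivity matrix (H equals its conjugate transpose).
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, 0.8]])
z0 = np.array([1.0 + 0.0j, 0.0 + 0.0j])

for t in np.linspace(0.0, 10.0, 5):
    x = expm(A * t) @ x0          # decaying solution of the original model
    z = expm(1j * H * t) @ z0     # norm-preserving oscillatory solution
    print(f"t={t:4.1f}  |x|={np.linalg.norm(x):.3f}  |z|={np.linalg.norm(z):.3f}")
```

The conserved norm of z is exactly the kind of conservation law, tied to a symmetry of the action, that the Lagrangian formulation makes accessible.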

