Emerging Artificial Neuron Devices for Probabilistic Computing

2021 ◽  
Vol 15 ◽  
Author(s):  
Zong-xiao Li ◽  
Xiao-ying Geng ◽  
Jingrui Wang ◽  
Fei Zhuge

In recent decades, artificial intelligence has been successfully employed in finance, commerce, and other industries. However, imitating high-level brain functions, such as imagination and inference, poses several challenges because these functions rely on a particular type of noise in biological neuronal networks. Probabilistic computing algorithms based on restricted Boltzmann machines and Bayesian inference implemented in silicon electronics have progressed significantly in mimicking probabilistic inference. However, the quasi-random noise generated by additional circuits or algorithms remains a major obstacle to realizing the true stochasticity of biological neuronal systems in silicon electronics. Artificial neurons based on emerging devices with inherent stochasticity, such as memristors and ferroelectric field-effect transistors, can produce uncertain non-linear output spikes, which may be the key to bringing machine learning closer to the human brain. In this article, we present a comprehensive review of recent advances in emerging stochastic artificial neurons (SANs) for probabilistic computing. We briefly introduce biological neurons, neuron models, and silicon neurons before presenting the detailed working mechanisms of various SANs. Finally, the merits and demerits of silicon-based and emerging neurons are discussed, and an outlook for SANs is presented.
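The primitive underlying these devices can be illustrated in software: a binary neuron that fires with a probability given by a sigmoid of its net input, as in a restricted Boltzmann machine unit. The sketch below is a minimal illustration of this idea (names and values are ours, not from the review); hardware SANs realize the random draw physically rather than with the pseudo-random generator used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(inputs, weights, bias):
    """Fire with probability sigmoid(w.x + b), as in an RBM unit."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))
    return int(rng.random() < p)  # 1 = spike, 0 = silent

# The same input yields different outputs across trials: the noise is
# part of the computation, not an artifact to be filtered out.
x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, -0.3, 0.8])
print([stochastic_neuron(x, w, bias=-0.5) for _ in range(10)])
```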

2020 ◽  
Author(s):  
Alexander J.E. Kell ◽  
Sophie L. Bokor ◽  
You-Nah Jeon ◽  
Tahereh Toosi ◽  
Elias B. Issa

The marmoset—a small monkey with a flat cortex—offers powerful techniques for studying neural circuits in a primate. However, it remains unclear whether brain functions typically studied in larger primates can be studied in the marmoset. Here, we asked whether the 300-gram marmoset’s perceptual and cognitive repertoire approaches human levels or is instead closer to rodents’. Using high-level visual object recognition as a testbed, we found that on the same task marmosets substantially outperformed rats and generalized far more robustly across images, all while performing ∼1000 trials/day. We then compared marmosets against the high standard of human behavior. Across the same 400 images, marmosets’ image-by-image recognition behavior was strikingly human-like—essentially as human-like as macaques’. These results demonstrate that marmosets have been substantially underestimated and that high-level abilities have been conserved across simian primates. Consequently, marmosets are a potent small model organism for visual neuroscience, and perhaps beyond.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Salvador Dura-Bernal ◽  
Benjamin A Suter ◽  
Padraig Gleeson ◽  
Matteo Cantarelli ◽  
Adrian Quintana ◽  
...  

Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces to develop data-driven multiscale network models in NEURON. NetPyNE clearly separates model parameters from implementation code. Users provide high-level specifications via a standardized declarative language; for example, a single connectivity rule can create millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis: connectivity matrices, voltage traces, spike raster plots, local field potentials, and information-theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience and by modelers to investigate brain regions and phenomena.
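To give a flavor of the declarative style the abstract describes, here is a minimal sketch following NetPyNE's documented tutorial API. It requires NEURON and NetPyNE installed; the population sizes, geometry, and synaptic values are illustrative, not taken from the paper.

```python
from netpyne import specs, sim

netParams = specs.NetParams()

# Two populations of single-compartment Hodgkin-Huxley cells.
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 80}
netParams.popParams['I'] = {'cellType': 'BAS', 'numCells': 20}

netParams.cellParams['HHrule'] = {
    'conds': {'cellType': ['PYR', 'BAS']},
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036}}}}}

netParams.synMechParams['exc'] = {'mod': 'Exp2Syn',
                                  'tau1': 0.1, 'tau2': 5.0, 'e': 0}

# One declarative rule expands into many cell-to-cell connections.
netParams.connParams['E->I'] = {
    'preConds': {'pop': 'E'}, 'postConds': {'pop': 'I'},
    'probability': 0.1, 'weight': 0.005, 'delay': 5, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000  # ms
simConfig.analysis['plotRaster'] = True

# Build the NEURON network, simulate, and run the requested analyses.
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```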


Author(s):  
Thomas P. Trappenberg

This chapter discusses the basic operation of an artificial neural network, the major paradigm of deep learning. The name derives from an analogy to the biological brain. The discussion begins by outlining the basic operations of neurons in the brain and how these operations are abstracted by simple neuron models. It then builds networks of artificial neurons that constitute much of the recent success of AI. The focus of the chapter is on using such techniques, with subsequent consideration of their theoretical embedding.
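As a concrete instance of the abstraction the chapter starts from, the standard textbook neuron model reduces a biological cell to a weighted sum of its inputs passed through a nonlinearity, and a network is layers of such units. A minimal sketch of our own (not the chapter's code):

```python
import numpy as np

def neuron(x, w, b):
    """One abstract neuron: weighted input sum through a nonlinearity."""
    return np.tanh(np.dot(w, x) + b)

def layer(x, W, b):
    """A layer is many such neurons sharing the same input vector."""
    return np.tanh(W @ x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)           # input activities (e.g., firing rates)
W = rng.normal(size=(3, 4))      # synaptic weights for three neurons
print(layer(x, W, np.zeros(3)))  # three output activations
```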


Nanomaterials ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 2326
Author(s):  
Shania Rehman ◽  
Muhammad Farooq Khan ◽  
Mehr Khalid Rahmani ◽  
Honggyun Kim ◽  
Harshada Patil ◽  
...  

The diversity of brain functions depends on the release of neurotransmitters at chemical synapses. Back-gated three-terminal field-effect transistors (FETs) are promising candidates for emulating biological functions in proficient neuromorphic computing systems. To induce hysteresis loops, we treated the bottom side of a MoTe2 flake with deep-ultraviolet light under ambient conditions. Here, we modulate short-term and long-term memory effects arising from electron trapping and de-trapping events in a few-layer MoTe2 transistor. The MoTe2 FETs were further investigated to reveal the time constants of electron trapping/de-trapping under applied gate-voltage pulses. Our devices exploit the hysteresis in the transfer curves of MoTe2 FETs to realize excitatory/inhibitory post-synaptic currents (EPSC/IPSC), long-term potentiation (LTP), long-term depression (LTD), spike timing/amplitude-dependent plasticity (STDP/SADP), and paired-pulse facilitation (PPF). The time constants for potentiation and depression are found to be 0.6 and 0.9 s, respectively, which is plausible for biological synapses. In addition, the change of synaptic weight in the MoTe2 conductance is found to be 41% for a negative gate pulse and 38% for a positive gate pulse. Our findings can play an essential role in the advancement of smart neuromorphic electronics.
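Time constants such as the 0.6 s potentiation value quoted above are typically extracted by fitting an exponential to the measured current relaxation. The sketch below shows such a fit on synthetic data; the trace and all names are ours, not the paper's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic post-synaptic current trace; the 0.6 s constant mirrors
# the potentiation value reported in the abstract.
t = np.linspace(0, 3, 300)  # time (s)
rng = np.random.default_rng(2)
trace = np.exp(-t / 0.6) + 0.02 * rng.normal(size=t.size)

def decay(t, amplitude, tau):
    """Single-exponential relaxation toward baseline."""
    return amplitude * np.exp(-t / tau)

popt, _ = curve_fit(decay, t, trace, p0=(1.0, 1.0))
print(f"fitted time constant: {popt[1]:.2f} s")  # ~0.6 s
```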


2020 ◽  
Vol 10 (6) ◽  
pp. 389
Author(s):  
David Sandor Kiss ◽  
Istvan Toth ◽  
Gergely Jocsak ◽  
Zoltan Barany ◽  
Tibor Bartha ◽  
...  

Anatomically, the brain is a symmetric structure. However, growing evidence suggests that certain higher brain functions are regulated by only one of the otherwise duplicated (and symmetric) brain halves. Hemispheric specialization correlates with phylogeny, supporting intellectual evolution by providing an ergonomic way of brain processing. The more complex the task, the greater the benefits of functional lateralization (all higher functions show some degree of lateralized task sharing). Functional asymmetry has been broadly studied in several brain areas with mirrored halves, such as the telencephalon and hippocampus. Despite its paired structure, the hypothalamus has generally been considered a functionally unpaired unit, even though the regulation of a vast number of strongly interrelated homeostatic processes is attributed to this relatively small brain region. In this review, we collected all available knowledge supporting the hypothesis that the hypothalamus is functionally lateralized. We collected and discussed findings from previous studies demonstrating lateralized hypothalamic control of reproductive functions and energy expenditure. Sporadic data also suggest partial functional asymmetry in the regulation of circadian rhythm, body temperature, and circulatory functions. These hitherto neglected data highlight the likely high-level ergonomics provided by such functional asymmetry.


2019 ◽  
Vol 30 (9) ◽  
pp. 1318-1332 ◽  
Author(s):  
Siobhán Harty ◽  
Roi Cohen Kadosh

Interindividual variability in outcomes poses great challenges for the application of noninvasive brain stimulation in psychological research. Here, we examined how the effects of high-frequency transcranial random-noise stimulation (tRNS) on sustained attention varied as a function of a well-studied electrocortical marker: the spontaneous theta:beta ratio. Seventy-two participants received sham, 1-mA, and 2-mA tRNS in a double-blind, crossover manner while they performed a sustained-attention task. Receiving 1-mA tRNS was associated with improved sustained attention, whereas the effect of 2-mA tRNS was similar to the effect of sham tRNS. Furthermore, individuals’ baseline theta:beta ratio moderated the effects of 1-mA tRNS and provided explanatory power beyond baseline behavioral performance. The tRNS-related effects on sustained attention were also accompanied by reductions in theta:beta ratio. These findings impart novel insights into the mechanisms underlying tRNS effects and emphasize how designing studies that link variability in cognitive outcomes to variability in neurophysiology can improve inferential power in neurocognitive research.
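The moderating variable here, the theta:beta ratio, is simply a ratio of EEG band powers. A minimal sketch of how it might be computed from a resting recording; the band edges and all names are our assumptions, not the paper's analysis code.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def theta_beta_ratio(eeg, fs, theta=(4, 7), beta=(13, 30)):
    """Ratio of theta to beta band power via Welch's PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        return trapezoid(psd[mask], freqs[mask])
    return band_power(*theta) / band_power(*beta)

# Synthetic example: 60 s of broadband noise sampled at 250 Hz.
rng = np.random.default_rng(3)
fs = 250
print(theta_beta_ratio(rng.normal(size=60 * fs), fs))
```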


2007 ◽  
Vol 17 (04) ◽  
pp. 1109-1150 ◽  
Author(s):  
MAKOTO ITOH ◽  
LEON O. CHUA

Many useful and well-known image processing templates for cellular neural networks (CNNs) can be derived from neural field models, thereby providing a neural basis for the CNN paradigm. The potential for multitasking image processing is investigated using these templates. Many visual illusions are simulated via CNN image processing. The ability of the CNN to mimic such high-level brain functions suggests possible applications of the CNN in cognitive engineering. Furthermore, two kinds of painting-like image processing, namely texture generation and illustration-style transformation, are investigated.
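The CNN here is the Chua-Yang cellular neural network, whose cell dynamics follow dx/dt = -x + A*y + B*u + z with the piecewise-linear output y = 0.5(|x+1| - |x-1|); a "template" is the pair of local coupling matrices A and B plus the bias z. A minimal Euler-integration sketch of our own; the edge-detection template values are commonly cited ones, not necessarily those derived in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_run(u, A, B, z, dt=0.05, steps=400):
    """Euler-integrate dx/dt = -x + A*y + B*u + z over the image grid,
    with the piecewise-linear output y = 0.5(|x+1| - |x-1|)."""
    x = np.zeros_like(u)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        dx = (-x + convolve2d(y, A, mode='same')
                 + convolve2d(u, B, mode='same') + z)
        x = x + dt * dx
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# A commonly cited edge-detection template pair.
A = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
u = np.zeros((32, 32)); u[8:24, 8:24] = 1.0  # white square on black
# Edge pixels converge to +1, flat regions to -1.
print(cnn_run(u, A, B, z=-1.0)[7:10, 7:10].round(1))
```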


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Shan Pang ◽  
Xinyi Yang

In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, namely the deep convolutional extreme learning machine (DC-ELM), which combines the feature-extraction power of CNNs with the fast training of ELMs. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, saving considerable training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets, MNIST and USPS. Experimental results show that our method achieves better test accuracy with significantly shorter training time than other deep learning and ELM methods.
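The ELM stage that gives DC-ELM its training speed needs no backpropagation: hidden weights are random and fixed, and only the output weights are solved, in closed form, by least squares. A minimal sketch of our own, with random vectors standing in for the CNN-extracted features:

```python
import numpy as np

class ELM:
    """Extreme learning machine: random fixed hidden weights; output
    weights solved in closed form by least squares (no backprop)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, T):
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), T, rcond=None)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Toy usage: random vectors stand in for pooled convolutional features.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 64))
T = np.eye(10)[rng.integers(0, 10, size=200)]  # one-hot labels
print(ELM(64, 256).fit(X, T).predict(X[:5]))
```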


Author(s):  
Jyh-Woei Lin

Artificial neural networks (ANNs) emulate the operating process of the human brain. This study offers new comments, framed as an ongoing comparison of ANNs with biological neuronal networks, that run against popular opinion. Popular accounts assign the synapse only the role of the weights in the ANN framework. In this paper, a broader view is suggested: the synapse should indeed be treated as a weight of the ANN connecting two neurons in different hidden layers, but when an accurate ANN model is built with optimal weights, the synapse's role should also encompass converting the action potential into electrical and chemical energy, and synaptic strengthening corresponding to long-term potentiation (LTP) in the biological neuronal network. From the standpoint of pharmacology, updating the weights to optimal values after training on more data is similar to maintaining normal signal conversion for LTP by using medication to resist aging-related brain diseases such as dementia. This comparison of the two kinds of neural network supports the new view proposed in this study.


2021 ◽  
pp. 1-40
Author(s):  
Cecilia Romaro ◽  
Fernando Araujo Najman ◽  
William W. Lytton ◽  
Antonio C. Roque ◽  
Salvador Dura-Bernal

The Potjans-Diesmann cortical microcircuit model is a widely used model originally implemented in NEST. Here, we reimplemented the model using NetPyNE, a high-level Python interface to the NEURON simulator, and reproduced the findings of the original publication. We also implemented a method for scaling the network size that preserves first- and second-order statistics, building on existing work on network theory. Our new implementation enabled the use of more detailed neuron models with multicompartmental morphologies and multiple biophysically realistic ion channels. This opens the model to new research, including the study of dendritic processing, the influence of individual channel parameters, the relation to local field potentials, and other multiscale interactions. The scaling method we used provides flexibility to increase or decrease the network size as needed when running these CPU-intensive detailed simulations. Finally, NetPyNE facilitates modifying or extending the model using its declarative language; optimizing model parameters; running efficient, large-scale parallelized simulations; and analyzing the model through built-in methods, including local field potential calculation and information flow measures.
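The idea behind mean- and variance-preserving downscaling can be summarized in a few lines: when only a fraction of neurons and indegrees is kept, rescaling synaptic weights by 1/sqrt(scale) keeps the variance of the summed input constant, and a DC drive compensates the reduced mean. The sketch below is our own rendering of this standard scheme; the exact method and parameter values in the paper may differ, and the numbers here only loosely echo the order of magnitude of the full model.

```python
import numpy as np

def scale_network(indegree_full, w_full, rate, scale):
    """Downscale indegrees by `scale` (0 < scale <= 1) while preserving
    the mean and variance of the total synaptic input to each neuron."""
    k = indegree_full * scale        # fewer inputs per neuron
    w = w_full / np.sqrt(scale)      # keeps variance: k * w**2 constant
    # Mean input k*w*rate drops by sqrt(scale); add DC to compensate.
    dc = indegree_full * w_full * rate * (1 - np.sqrt(scale))
    return k, w, dc

# Illustrative values only (roughly the full model's scale).
print(scale_network(indegree_full=4000, w_full=87.8e-3,  # weight in nA
                    rate=8.0, scale=0.1))
```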

