Memristive Spiking Neural Networks Trained with Unsupervised STDP

Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 396 ◽  
Author(s):  
Errui Zhou ◽  
Liang Fang ◽  
Binbin Yang

Neuromorphic computing systems are promising alternatives in fields such as pattern recognition and image processing, especially as conventional von Neumann architectures face several bottlenecks. Memristors play vital roles in neuromorphic computing systems and are usually used as synaptic devices. Memristive spiking neural networks (MSNNs) are considered more efficient and biologically plausible than other systems due to their spike-based working mechanism. In contrast to previous SNNs with complex architectures, we propose a hardware-friendly architecture and an unsupervised spike-timing-dependent plasticity (STDP) learning method for MSNNs in this paper. The architecture includes an input layer, a feature-learning layer, and a voting circuit. To reduce hardware complexity, several constraints are enforced: the architecture is purely feedforward with no lateral inhibition; it uses the voting circuit as the classifier, with no additional classifiers; each neuron generates at most one spike, so firing rates and refractory periods need not be considered; and all neurons share the same fixed threshold voltage for classification. The presented unsupervised STDP learning method is time-dependent and uses no homeostatic mechanism. The MNIST dataset is used to evaluate the proposed architecture and learning method. Simulation results show that the proposed architecture with this learning method achieves a classification accuracy of 94.6%, outperforming other unsupervised SNNs that use time-based encoding schemes.
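The time-dependent STDP principle underlying methods like this one can be sketched as a pair-based weight update: a presynaptic spike arriving before a postsynaptic one potentiates the synapse, the reverse order depresses it. This is a minimal generic illustration, not the authors' exact rule; the learning rates and time constant are placeholder values.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.03,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one, depress otherwise. The weight
    change decays exponentially with the spike-time difference."""
    dt = t_post - t_pre
    if dt >= 0:                       # pre before post -> LTP
        dw = a_plus * np.exp(-dt / tau)
    else:                             # post before pre -> LTD
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, w_min, w_max))

# One synapse: pre spike at 5 ms, post spike at 12 ms -> potentiation
w_new = stdp_update(0.5, t_pre=5.0, t_post=12.0)
```

Because each neuron here fires at most one spike, a single pre/post spike pair per synapse is all the rule ever needs to consider.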

2021 ◽  
Author(s):  
Shiva Subbulakshmi Radhakrishnan ◽  
Akhil Dodda ◽  
Saptarshi Das

Abstract In spite of recent advancements in bio-realistic artificial neural networks such as spiking neural networks (SNNs), the energy efficiency, multifunctionality, adaptability, and integrated nature of biological neural networks (BNNs) largely remain unreplicated in hardware neuromorphic computing systems. Here we exploit optoelectronic and programmable memory devices based on emerging two-dimensional (2D) layered materials such as MoS2 to demonstrate an “all-in-one” hardware SNN system capable of sensing, encoding, unsupervised learning, and inference at minuscule energy expenditure. In short, we utilize the photogating effect in a MoS2-based neuromorphic phototransistor for sensing and direct encoding of analog optical information into graded spike trains; we design a MoS2-based neuromorphic encoding module for conversion of spike trains into spike-count- and spike-timing-based programming voltages; and finally we use arrays of programmable MoS2 non-volatile synapses for spike-based unsupervised learning and inference. We also demonstrate the adaptability of our SNN for learning under scotopic (low-light) and photopic (bright-light) conditions, mimicking the neuroplasticity of BNNs. Furthermore, we use our hardware SNN platform to show learning challenges under specific synaptic conditions, which can aid in understanding learning disabilities in BNNs. Our findings highlight the potential of in-memory computing and sensing based on emerging 2D materials, devices, and circuits not only to overcome the von Neumann bottleneck of conventional CMOS designs, but also to eliminate peripheral components necessary for competing technologies such as memristors, RRAM, and PCM, and to bridge the understanding between the neuroscience of learning and machine learning.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jonathan K. George ◽  
Cesare Soci ◽  
Mario Miscuglio ◽  
Volker J. Sorger

Abstract Mirror symmetry is an abundant feature in both nature and technology. Its successful detection is critical for perception procedures based on visual stimuli and requires organizational processes. Neuromorphic computing, utilizing brain-mimicked networks, could be a technology solution providing such perceptual organization functionality, and has furthermore made tremendous advances in computing efficiency by applying a spiking model of information. Spiking models inherently maximize efficiency in noisy environments by placing the energy of the signal in a minimal time. However, many neuromorphic computing models ignore the time delay between nodes, choosing instead to approximate connections between neurons as instantaneous weightings. With this assumption, many complex time interactions of spiking neurons are lost. Here, we show that the coincidence-detection property of a spiking-based feed-forward neural network enables mirror-symmetry detection. Testing this algorithm on exemplary geospatial satellite image data sets reveals how symmetry density enables automated recognition of man-made structures over vegetation. We further demonstrate that the addition of noise improves the feature detectability of an image through coincidence-point generation. The ability to obtain mirror symmetry from spiking neural networks can be a powerful tool for applications in image-based rendering, computer graphics, robotics, photo interpretation, image retrieval, video analysis and annotation, and multimedia, and may help accelerate the brain-machine interconnection. More importantly, it enables a technology pathway toward bridging the gap between low-level incoming sensor stimuli and high-level interpretation of these inputs as recognized objects and scenes in the world.
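The coincidence-detection idea can be illustrated in a toy form: if each active pixel fires one spike whose delay encodes its distance from a candidate mirror axis, then pixels mirrored about that axis produce coincident spikes, and counting coincidences scores the axis. This is a simplified one-row sketch with assumed parameters (`delay_per_px`, `window`), not the paper's network.

```python
import numpy as np

def symmetry_score(row, axis, delay_per_px=1.0, window=0.5):
    """Count coincident spike pairs for a candidate mirror axis.
    Each active pixel emits one spike whose delay is proportional
    to its distance from the axis; mirrored pixels on opposite
    sides therefore spike at (nearly) the same time."""
    cols = np.flatnonzero(row)
    times = delay_per_px * np.abs(cols - axis)
    left = [t for t, c in zip(times, cols) if c < axis]
    right = [t for t, c in zip(times, cols) if c > axis]
    # a coincidence detector fires when two spikes fall in the window
    return sum(1 for tl in left for tr in right
               if abs(tl - tr) <= window)

row = np.array([1, 0, 1, 0, 1, 0, 1])   # symmetric about column 3
best = symmetry_score(row, axis=3)
```

The true axis of symmetry maximizes the coincidence count, which is the quantity the abstract's "symmetry density" aggregates over an image.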


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1526 ◽  
Author(s):  
Choongmin Kim ◽  
Jacob A. Abraham ◽  
Woochul Kang ◽  
Jaeyong Chung

Crossbar-based neuromorphic computing, which accelerates neural networks, is a popular alternative to conventional von Neumann computing systems; it is also referred to as processing-in-memory or in-situ analog computing. Crossbars provide a fixed number of synapses per neuron, so neurons must be decomposed to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm, which can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are then used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons.
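The first-level (global) step can be pictured as a packing problem: fill each fixed-size crossbar with neurons while holding back k positions for the later local refinement. A minimal sketch under assumed data structures (the paper's actual algorithm and cost model are more involved):

```python
def global_decompose(n_neurons, crossbar_size, k):
    """Global decomposition sketch: pack neuron indices into
    crossbars of a fixed size while reserving k spare neurons per
    crossbar for subsequent local (accuracy-improving) refinement."""
    assert 0 <= k < crossbar_size
    usable = crossbar_size - k        # capacity left after spares
    crossbars = []
    start = 0
    while start < n_neurons:
        end = min(start + usable, n_neurons)
        crossbars.append({"mapped": list(range(start, end)),
                          "spares": k})
        start = end
    return crossbars

# 100 neurons onto 32-wide crossbars with 4 spares each
plan = global_decompose(100, crossbar_size=32, k=4)
```

Raising k uses more crossbars (more neurons overall) but leaves more slack for the local decomposition to recover accuracy, which is exactly the trade-off the abstract describes.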


Author(s):  
Xiumin Li ◽  
Qing Chen ◽  
Fangzheng Xue

In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. We examined the computational performance of an SNN operating at the critical state, in particular with spike-timing-dependent plasticity for updating the synaptic weights. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications for the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’.
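Criticality in such networks is commonly quantified by the branching parameter sigma, the average number of descendant spikes per ancestor spike: sigma < 1 is subcritical (activity dies out), sigma > 1 is supercritical (activity blows up), and sigma close to 1 marks the critical state. A minimal estimator over binned spike counts (an illustrative sketch, not the paper's measurement procedure):

```python
import numpy as np

def branching_ratio(activity):
    """Estimate the branching parameter sigma as the mean ratio of
    spike counts in consecutive time bins; the network is near the
    critical state when sigma is close to 1."""
    activity = np.asarray(activity, dtype=float)
    ancestors, descendants = activity[:-1], activity[1:]
    mask = ancestors > 0              # ratio defined only when the
    return float(np.mean(descendants[mask] / ancestors[mask]))

# A roughly self-sustaining (near-critical) activity trace
sigma = branching_ratio([4, 4, 5, 4, 4, 3, 4])
```

Tracking sigma while synaptic learning reshapes the network is one simple way to check whether a refined structure keeps the dynamics inside the narrow critical region the abstract refers to.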


2014 ◽  
Vol 144 ◽  
pp. 526-536 ◽  
Author(s):  
Jinling Wang ◽  
Ammar Belatreche ◽  
Liam Maguire ◽  
Thomas Martin McGinnity

Author(s):  
Meng Qi ◽  
Tianquan Fu ◽  
Huadong Yang ◽  
Ye Tao ◽  
Chunran Li ◽  
...  

Abstract Human-brain synaptic memory simulation based on resistive random access memory (RRAM) has enormous potential to replace the traditional von Neumann digital computer thanks to several advantages, including its simple structure, high-density integration, and capability for both information storage and neuromorphic computing. Herein, reliable resistive switching (RS) behaviors of RRAM are demonstrated by engineering an AlOx/HfOx bilayer structure, which allows for uniform multibit information storage. Further, the analog switching behaviors are capable of imitating several synaptic learning functions, including learning-experience behaviors, the short-term-plasticity to long-term-plasticity transition, and spike-timing-dependent plasticity (STDP). In addition, memristors based on STDP learning rules are implemented in image pattern recognition. These results suggest the promising potential of HfOx-based memristors for future information storage and neuromorphic computing applications.


2020 ◽  
Vol 34 (02) ◽  
pp. 1316-1323 ◽  
Author(s):  
Zuozhu Liu ◽  
Thiparat Chotibut ◽  
Christopher Hillar ◽  
Shaowei Lin

Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks. Our model has a local learning rule, such that the synaptic weight updates depend only on the information directly accessible by the synapse. By exploiting asymmetry in the connections between binary neurons, we show that MPN can be trained to robustly memorize multiple spatiotemporal patterns of binary vectors, generalizing the ability of the symmetric Hopfield network to memorize static spatial patterns. In addition, we demonstrate that the model can efficiently learn sequences of binary pictures as well as generative models for experimental neural spike-train data. Our learning rule is consistent with spike-timing-dependent plasticity (STDP), thus providing a theoretical ground for the systematic design of biologically inspired networks with large and robust long-range sequence storage capacity.
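The core mechanism, asymmetric connections that generalize the symmetric Hopfield network to sequences, can be illustrated with a classic Hebbian-style construction in which each pattern's outer product points to its successor, so multiplying by the weight matrix steps the network forward in time. This is a generic temporal-association sketch, not the MPN's continuous-time dynamics or its local learning rule.

```python
import numpy as np

def store_sequence(patterns):
    """Asymmetric Hebbian weights: each +/-1 pattern is linked to
    its successor, so W @ x_t approximately recalls x_{t+1}
    (contrast with the symmetric Hopfield outer-product rule,
    which only stores static patterns)."""
    X = np.array(patterns, dtype=float)          # shape (T, N)
    n = X.shape[1]
    W = np.zeros((n, n))
    for t in range(len(X) - 1):
        W += np.outer(X[t + 1], X[t]) / n        # x_t -> x_{t+1}
    return W

def recall_next(W, x):
    """One binary update step: threshold the weighted input."""
    return np.sign(W @ x)

rng = np.random.default_rng(0)
seq = [rng.choice([-1.0, 1.0], size=64) for _ in range(5)]
W = store_sequence(seq)

# Starting from the first pattern, the network steps through the rest
x = seq[0]
for t in range(1, len(seq)):
    x = recall_next(W, x)
```

The asymmetry W is what breaks the fixed-point behavior of the symmetric Hopfield network and lets the state trajectory traverse the stored spatiotemporal pattern.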

