Applications of Memristors in Neural Networks and Neuromorphic Computing: A Review

2021
Vol 11 (5)
pp. 350-356
Author(s):
Ye-Guo Wang

2021
Vol 11 (1)
Author(s):
Jonathan K. George
Cesare Soci
Mario Miscuglio
Volker J. Sorger

Abstract: Mirror symmetry is an abundant feature in both nature and technology. Its successful detection is critical for perception procedures based on visual stimuli and requires organizational processes. Neuromorphic computing, utilizing brain-mimicking networks, could be a technology solution providing such perceptual organization functionality, and it has furthermore made tremendous advances in computing efficiency by applying a spiking model of information. Spiking models inherently maximize efficiency in noisy environments by placing the energy of the signal in a minimal time. However, many neuromorphic computing models ignore the time delay between nodes, choosing instead to approximate connections between neurons as instantaneous weightings. With this assumption, many complex time interactions of spiking neurons are lost. Here, we show that the coincidence detection property of a spiking-based feed-forward neural network enables mirror symmetry detection. Testing this algorithm on exemplary geospatial satellite image data sets reveals how symmetry density enables automated recognition of man-made structures over vegetation. We further demonstrate that the addition of noise improves the feature detectability of an image through coincidence point generation. The ability to obtain mirror symmetry from spiking neural networks can be a powerful tool for applications in image-based rendering, computer graphics, robotics, photo interpretation, image retrieval, video analysis and annotation, and multimedia, and may help accelerate the brain-machine interconnection. More importantly, it enables a technology pathway for bridging the gap between low-level incoming sensor stimuli and the high-level interpretation of these inputs as recognized objects and scenes in the world.
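
As an illustration of the coincidence-detection idea described in this abstract, the following minimal sketch models a single row of pixels with latency-coded spikes travelling toward a candidate symmetry axis with fixed per-pixel delays; spikes from mirror-symmetric pixels of equal intensity then arrive together at the detector. The latency coding, delay, and coincidence window used here are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of delay-based coincidence detection for mirror symmetry,
# assuming a 1-D row of pixels encoded as spike times (intensity -> latency).
# All names and parameter values are illustrative, not taken from the paper.
import numpy as np

def spike_times(row, t_max=10.0):
    """Latency coding: brighter pixels spike earlier (hypothetical scheme)."""
    row = np.asarray(row, dtype=float)
    return t_max * (1.0 - row / (row.max() + 1e-9))

def symmetry_coincidences(row, axis, delay_per_pixel=1.0, window=0.5):
    """Count coincident spike arrivals at a detector placed on `axis`.

    Each pixel's spike travels toward the axis with a fixed delay per pixel,
    so spikes from mirror-symmetric pixels of equal intensity arrive together.
    """
    t = spike_times(row)
    arrivals_left, arrivals_right = [], []
    for i, ti in enumerate(t):
        arrival = ti + abs(i - axis) * delay_per_pixel
        (arrivals_left if i < axis else arrivals_right).append(arrival)
    # A coincidence = a left and a right spike arriving within `window`.
    return sum(1 for a in arrivals_left for b in arrivals_right
               if abs(a - b) < window)

row = [0.1, 0.9, 0.4, 0.4, 0.9, 0.1]        # mirror-symmetric about index 2.5
print(symmetry_coincidences(row, axis=2.5))  # high count -> symmetry detected
```

A symmetry-density map, as used for the satellite imagery in the paper, would aggregate such coincidence counts over many candidate axes and image rows.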


Electronics
2018
Vol 7 (12)
pp. 396
Author(s):
Errui Zhou
Liang Fang
Binbin Yang

Neuromorphic computing systems are promising alternatives in fields such as pattern recognition and image processing, especially when conventional von Neumann architectures face several bottlenecks. Memristors play vital roles in neuromorphic computing systems and are usually used as synaptic devices. Memristive spiking neural networks (MSNNs) are considered to be more efficient and biologically plausible than other systems due to their spike-based working mechanism. In contrast to previous spiking neural networks (SNNs) with complex architectures, we propose a hardware-friendly architecture and an unsupervised spike-timing-dependent plasticity (STDP) learning method for MSNNs in this paper. The architecture, which is friendly to hardware implementation, includes an input layer, a feature-learning layer, and a voting circuit. To reduce hardware complexity, some constraints are enforced: the proposed architecture has no lateral inhibition and is purely feedforward; it uses the voting circuit as a classifier and does not use additional classifiers; all neurons generate at most one spike and do not need to consider firing rates or refractory periods; and all neurons have the same fixed threshold voltage for classification. The presented unsupervised STDP learning method is time-dependent and uses no homeostatic mechanism. The MNIST dataset is used to demonstrate our proposed architecture and learning method. Simulation results show that our proposed architecture with the learning method achieves a classification accuracy of 94.6%, which outperforms other unsupervised SNNs that use time-based encoding schemes.
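
For concreteness, the sketch below shows a generic single-spike, time-dependent STDP-style weight update of the kind this abstract describes: synapses whose input spike precedes the neuron's single output spike are potentiated and the rest are depressed. The exponential form, learning rates, and clipping bounds are common textbook assumptions, not the paper's exact rule.

```python
# Minimal sketch of an unsupervised, time-dependent STDP-style update for a
# single-spike network (illustrative only; not the paper's exact learning rule).
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.03,
                tau=5.0, w_min=0.0, w_max=1.0):
    """Potentiate synapses whose input spike preceded the output spike,
    depress the rest; each neuron fires at most once per input sample."""
    dt = t_post - t_pre                        # > 0 means pre fired before post
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),  # causal pairing: strengthen
                  -a_minus * np.exp(dt / tau)) # anti-causal pairing: weaken
    return np.clip(w + dw, w_min, w_max)

rng = np.random.default_rng(0)
w = rng.uniform(0.2, 0.8, size=784)        # one feature neuron, MNIST-sized input
t_pre = rng.uniform(0.0, 10.0, size=784)   # latency-coded input spike times
t_post = 6.0                               # the neuron's single output spike time
w = stdp_update(w, t_pre, t_post)
```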


2018
Vol 27 (4)
pp. 667-674
Author(s):
Ming ZHANG
Zonghua GU
Gang PAN

2021
Author(s):
Liwei Yang
Huaipeng Zhang
Tao Luo
Chuping Qu
Myat Thu Linn Aung
...

Author(s):  
Mingyong Zhou

Objectives: In this paper, we present a theoretical discussion of neuromorphic computing circuit dynamics and its relations to AI deep learning neural networks. By elucidating these relations, we can focus on solving pattern classification and recognition problems in real applications from both the logic-computation and the physical-circuit-device perspectives. The investigation is motivated by the design of a feasible, fast, and energy-efficient circuit device, as well as an efficient training method, for solving complex problems, and for this purpose we propose such an approach in this paper. First, neuromorphic computing is introduced as a new research area, including physical circuit properties, memristive device physics, and the circuit dynamics described by temporal and spatial (Maxwell) differential equations. Second, we show that by training AI deep learning neural networks we are able to derive the optimal network weights. Last but not least, we briefly describe the mapping method in [1] and show in general how the neuromorphic circuit works in practice after the weights from AI deep learning neural networks are mapped onto the neuromorphic circuit synapses/memristors. Conclusion: We also discuss physical device feasibility and related matters. The method proposed in this paper is pragmatic and constructive.
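
To make the weight-mapping step concrete, the sketch below shows one common way signed trained weights can be placed onto bounded memristor conductances using a differential pair per synapse, so that a crossbar's output current difference is proportional to the weighted sum. The conductance limits and scaling are placeholder assumptions and this is not the specific method of [1].

```python
# Illustrative sketch of mapping trained ANN weights onto memristive synapses,
# assuming a differential-pair scheme (G+ minus G-) and a bounded conductance
# range; device parameters are placeholders, not taken from the cited method.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4        # assumed device conductance limits (siemens)

def weights_to_conductances(W):
    """Scale signed weights into two non-negative conductance matrices so that
    the crossbar current difference I+ - I- is proportional to W @ v."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = np.where(W > 0,  W, 0.0) * scale + G_MIN
    G_neg = np.where(W < 0, -W, 0.0) * scale + G_MIN
    return G_pos, G_neg

def crossbar_layer(v, G_pos, G_neg):
    """Analog matrix-vector multiply: Ohm's law plus Kirchhoff current summation.
    The common G_MIN offset cancels in the differential read-out."""
    return G_pos @ v - G_neg @ v

W = np.random.randn(4, 8) * 0.1          # weights from a trained network
G_pos, G_neg = weights_to_conductances(W)
v = np.random.rand(8)                    # input voltages applied to the rows
print(crossbar_layer(v, G_pos, G_neg))
```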


2020
Vol 10 (4)
pp. 36
Author(s):
Taeyang Hong
Yongshin Kang
Jaeyong Chung

Deep neural networks have demonstrated impressive results in various cognitive tasks such as object detection and image classification. This paper describes a neuromorphic computing system that is designed from the ground up for energy-efficient evaluation of deep neural networks. The computing system consists of a non-conventional compiler, a neuromorphic hardware architecture, and a space-efficient microarchitecture that leverages existing integrated circuit design methodologies. The compiler takes a trained feedforward network as input, compresses the weights linearly, and generates a time delay neural network, reducing the number of connections significantly. The connections and units in the simplified network are mapped to silicon synapses and neurons. We demonstrate an implementation of the neuromorphic computing system based on a field-programmable gate array (FPGA) that performs image classification on the MNIST dataset of handwritten digits 0-9 with 99.37% accuracy, consuming only 93 µJ per image. For image classification on the CIFAR-10 dataset of colour images in 10 classes, it achieves 83.43% accuracy at more than 11× higher energy efficiency compared to a recent FPGA-based accelerator.
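
As one plausible reading of the compiler's "compresses the weights linearly" step, the sketch below applies uniform (linear) quantization to a trained layer's weights before mapping; the 4-bit width and single-scale scheme are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of linear (uniform) weight compression, one possible
# interpretation of the compiler's linear compression step; bit width and
# scaling are assumptions, not taken from the paper.
import numpy as np

def compress_linear(W, bits=4):
    """Quantize weights to `bits`-bit signed integers with one linear scale."""
    q_max = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / q_max
    W_q = np.round(W / scale).astype(np.int8)   # integer codes stored on chip
    return W_q, scale

def decompress(W_q, scale):
    """Reconstruct approximate weights for evaluation or error analysis."""
    return W_q.astype(np.float32) * scale

W = np.random.randn(128, 256).astype(np.float32)  # a trained feedforward layer
W_q, s = compress_linear(W, bits=4)
err = np.abs(W - decompress(W_q, s)).mean()
print(f"mean absolute quantization error: {err:.4f}")
```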

