Quantized Neural Networks and Neuromorphic Computing for Embedded Systems

Author(s):  
Shiya Liu ◽  
Yang Yi
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jonathan K. George ◽  
Cesare Soci ◽  
Mario Miscuglio ◽  
Volker J. Sorger

Abstract Mirror symmetry is an abundant feature in both nature and technology. Its successful detection is critical for perception procedures based on visual stimuli and requires organizational processes. Neuromorphic computing, which utilizes brain-mimicking networks, could provide such perceptual-organization functionality, and has furthermore made tremendous advances in computing efficiency by applying a spiking model of information. Spiking models inherently maximize efficiency in noisy environments by placing the energy of the signal in a minimal time. However, many neuromorphic computing models ignore the time delay between nodes, choosing instead to approximate connections between neurons as instantaneous weightings. With this assumption, many complex time interactions of spiking neurons are lost. Here, we show that the coincidence-detection property of a spiking-based feed-forward neural network enables mirror-symmetry detection. Testing this algorithm on exemplary geospatial satellite image datasets reveals how symmetry density enables automated recognition of man-made structures over vegetation. We further demonstrate that the addition of noise improves the feature detectability of an image through coincidence-point generation. The ability to obtain mirror symmetry from spiking neural networks can be a powerful tool for applications in image-based rendering, computer graphics, robotics, photo interpretation, image retrieval, video analysis and annotation, and multimedia, and it may help accelerate the brain-machine interconnection. More importantly, it enables a technology pathway toward bridging the gap between low-level incoming sensor stimuli and the high-level interpretation of these inputs as recognized objects and scenes in the world.
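To make the coincidence-detection idea concrete, here is a minimal toy sketch, not the authors' implementation: pixels on either side of a candidate mirror axis feed a detector neuron through delay lines proportional to their distance from the axis, so mirrored pairs arrive simultaneously and register as coincidences. The function name, the coincidence window, and the 1-D setting are all illustrative assumptions.

```python
# Toy model: spike-based mirror-symmetry detection via coincidence.
# Delay encodes distance from a candidate axis, so symmetric pixel
# pairs arrive at the detector at the same time.
import numpy as np

def symmetry_score(row, axis, window=0.5, v_per_pair=1.0):
    """Count near-coincident spike arrivals at a detector neuron.

    row    : 1-D binary array of active pixels (spike sources)
    axis   : candidate mirror-axis position (index, may be fractional)
    window : coincidence window, in arbitrary time units
    """
    active = np.flatnonzero(row)
    arrivals = np.abs(active - axis)   # arrival time = distance from axis
    left = arrivals[active < axis]
    right = arrivals[active > axis]
    score = 0.0
    for t in left:
        # A pair is coincident if a right-side spike lands inside the window.
        if np.any(np.abs(right - t) < window):
            score += v_per_pair
    return score

row = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0])  # symmetric about index 4
print(symmetry_score(row, axis=4.0))          # high score -> symmetry detected
```

Sweeping the candidate axis across an image row and accumulating scores yields the kind of symmetry-density map the abstract describes, under these simplifying assumptions.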


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 396 ◽  
Author(s):  
Errui Zhou ◽  
Liang Fang ◽  
Binbin Yang

Neuromorphic computing systems are promising alternatives in fields such as pattern recognition and image processing, especially as conventional von Neumann architectures face several bottlenecks. Memristors play vital roles in neuromorphic computing systems and are usually used as synaptic devices. Memristive spiking neural networks (MSNNs) are considered more efficient and more biologically plausible than other systems due to their spike-based working mechanism. In contrast to previous SNNs with complex architectures, we propose a hardware-friendly architecture and an unsupervised spike-timing-dependent plasticity (STDP) learning method for MSNNs in this paper. The architecture includes an input layer, a feature-learning layer, and a voting circuit. To reduce hardware complexity, several constraints are enforced: the architecture is purely feedforward with no lateral inhibition; the voting circuit serves as the classifier, with no additional classifiers; each neuron generates at most one spike, so firing rates and refractory periods need not be considered; and all neurons share the same fixed threshold voltage for classification. The presented unsupervised STDP learning method is time-dependent and uses no homeostatic mechanism. The MNIST dataset is used to demonstrate the proposed architecture and learning method. Simulation results show that the proposed architecture with this learning method achieves a classification accuracy of 94.6%, outperforming other unsupervised SNNs that use time-based encoding schemes.
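A simplified sketch of how such a single-spike, fixed-threshold, time-dependent STDP rule might look in software follows. This is our reading under stated assumptions (time-to-first-spike input encoding, soft-bounded weight updates, no leak), not the paper's exact rule or parameter values.

```python
# Sketch: unsupervised, time-dependent STDP for single-spike neurons with a
# shared fixed threshold. Inputs that spiked at or before the post-synaptic
# spike are potentiated, all others depressed; no homeostasis, no refractoriness.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 784, 100
W = rng.uniform(0.0, 1.0, size=(n_out, n_in))
THETA = 20.0                  # fixed threshold shared by all feature neurons
A_PLUS, A_MINUS = 0.02, 0.01  # illustrative learning rates

def present(spike_times, W, t_max=100):
    """Time-to-first-spike simulation: each neuron fires at most once."""
    fired = np.full(W.shape[0], -1)
    potential = np.zeros(W.shape[0])
    for t in range(t_max):
        arriving = spike_times == t                # inputs spiking now
        potential += W[:, arriving].sum(axis=1)
        newly = (potential >= THETA) & (fired < 0)
        fired[newly] = t
    return fired

def stdp_update(W, spike_times, fired):
    for j in np.flatnonzero(fired >= 0):
        causal = spike_times <= fired[j]           # pre before post: potentiate
        W[j, causal] += A_PLUS * (1.0 - W[j, causal])
        W[j, ~causal] -= A_MINUS * W[j, ~causal]
    np.clip(W, 0.0, 1.0, out=W)
    return W

# One training presentation: stronger pixel intensity -> earlier spike.
x = rng.random(n_in)
spike_times = np.round((1.0 - x) * 99).astype(int)
fired = present(spike_times, W)
W = stdp_update(W, spike_times, fired)
```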


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Wei Wang ◽  
Yiyang Hu ◽  
Ting Zou ◽  
Hongmei Liu ◽  
Jin Wang ◽  
...  

Because deep neural networks (DNNs) are both memory-intensive and computation-intensive, they are difficult to deploy on embedded systems with limited hardware resources; DNN models therefore need to be compressed and accelerated. By applying depthwise separable convolutions, MobileNet reduces the number of parameters and the computational complexity with little loss of classification precision. Based on MobileNet, three improved MobileNet models with local receptive-field expansion in their shallow layers, called Dilated-MobileNet (Dilated Convolution MobileNet) models, are proposed, in which dilated convolutions are introduced into a specific convolutional layer of the MobileNet model. Without increasing the number of parameters, the dilated convolutions increase the receptive field of the convolution filters to obtain better classification accuracy. The experiments were performed on the Caltech-101, Caltech-256, and Tübingen Animals with Attributes datasets. The results show that Dilated-MobileNets can obtain up to 2% higher classification accuracy than MobileNet.
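The core trick is that dilation enlarges a filter's receptive field without adding weights. A hedged PyTorch sketch of a MobileNet-style depthwise separable block illustrates this; the layer choice and dilation rate here are illustrative, not the paper's exact configuration.

```python
# Replacing the depthwise 3x3 convolution with a dilated 3x3 keeps the
# parameter count unchanged while enlarging the receptive field.
import torch
import torch.nn as nn

def depthwise_separable(ch_in, ch_out, dilation=1):
    pad = dilation  # preserves spatial size for a 3x3 kernel
    return nn.Sequential(
        nn.Conv2d(ch_in, ch_in, 3, padding=pad, dilation=dilation,
                  groups=ch_in, bias=False),      # depthwise (possibly dilated)
        nn.BatchNorm2d(ch_in), nn.ReLU(inplace=True),
        nn.Conv2d(ch_in, ch_out, 1, bias=False),  # pointwise
        nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
    )

standard = depthwise_separable(64, 128, dilation=1)
dilated = depthwise_separable(64, 128, dilation=2)
count = lambda m: sum(p.numel() for p in m.parameters())
assert count(standard) == count(dilated)     # identical parameter budget
x = torch.randn(1, 64, 56, 56)
print(standard(x).shape, dilated(x).shape)   # same output size, larger RF
```

With dilation 2, the 3x3 depthwise kernel covers a 5x5 neighborhood while still holding only nine weights per channel, which is exactly the parameter-free receptive-field expansion the abstract claims.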


2019 ◽  
Vol 63 (7) ◽  
pp. 1099-1108
Author(s):  
Salem Ameen ◽  
Sunil Vadera

Abstract The successful application of deep learning has led to increasing expectations of its use in embedded systems. This, in turn, has created the need to find ways of reducing the size of neural networks. Decreasing the size of a neural network requires deciding which weights should be removed without compromising accuracy, which is analogous to the kind of problems addressed by multi-armed bandits (MABs). Hence, this paper explores the use of MABs for reducing the number of parameters of a neural network. Several MAB algorithms, namely $\epsilon$-greedy, win-stay lose-shift, UCB1, KL-UCB, BayesUCB, UGapEb, successive rejects, and Thompson sampling, are evaluated and their performance compared to existing approaches. The results show that MAB pruning methods, especially those based on UCB, outperform other pruning methods.
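A minimal sketch of the framing, in our reading and not the authors' code: each prunable unit is an arm, pulling an arm means tentatively removing that unit, and the reward is how much validation accuracy is retained. The helper `evaluate_without` is a hypothetical stand-in for that evaluation.

```python
# UCB1-driven pruning sketch: balance exploiting units whose removal has
# cost little accuracy so far against exploring rarely tried units.
import math
import random

def ucb1_prune(n_units, budget, evaluate_without):
    counts = [0] * n_units
    values = [0.0] * n_units
    total = 0
    for _ in range(budget):
        total += 1
        def ucb(i):
            if counts[i] == 0:
                return float("inf")   # try every arm at least once
            return values[i] + math.sqrt(2 * math.log(total) / counts[i])
        arm = max(range(n_units), key=ucb)
        reward = evaluate_without(arm)            # accuracy retained (0..1)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    # Units whose removal costs least accuracy are the best pruning targets.
    return sorted(range(n_units), key=lambda i: -values[i])

# Toy stand-in: removing unit i retains a fixed noisy fraction of accuracy.
retained = [0.99 - 0.3 * random.random() for _ in range(20)]
order = ucb1_prune(20, budget=200, evaluate_without=lambda i: retained[i])
print(order[:5])  # most prunable units first
```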


2016 ◽  
Vol 26 (01) ◽  
pp. 1750015 ◽  
Author(s):  
İsmail Koyuncu ◽  
İbrahim Şahin ◽  
Clay Gloster ◽  
Namık Kemal Sarıtekin

Artificial neural networks (ANNs) are implemented in hardware when software implementations are inadequate in terms of performance. Implementing an ANN in hardware without design automation tools is a time-consuming process; it can, however, be automated using pre-designed neurons. Thus, in this work, several artificial neural cells were designed and implemented to form a library of neurons for the rapid realization of ANNs on FPGA-based embedded systems. The library contains a total of 60 different neurons: two-, four-, and six-input, biased and non-biased, each with 10 different activation functions. The neurons are highly pipelined and were designed to be connected to each other like Lego pieces. Chip statistics showed that, depending on the neuron type, about 25 selected neurons can fit into the smallest Virtex-6 chip, and an ANN formed from the neurons can be clocked at up to 576.89 MHz. An ANN-based Rössler system was constructed to show the effectiveness of using the neurons for rapid realization of ANNs on embedded systems. Our experiments showed that, using these neurons, ANNs can rapidly be implemented in hardware and design time can be significantly reduced.
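The library's organisation follows directly from the abstract: 3 input widths x biased/non-biased x 10 activation functions = 60 cell types. A software behavioral sketch of that enumeration, not the authors' hardware description, might look as follows; the activation names shown are illustrative.

```python
# Behavioral model of a neuron-cell library: each (inputs, bias, activation)
# combination is one pre-designed, reusable cell.
import math
import itertools

ACTIVATIONS = {
    "sigmoid": lambda x: 1 / (1 + math.exp(-x)),
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
    # ... the real library holds 10 activation functions in total
}

def make_neuron(n_inputs, biased, activation):
    f = ACTIVATIONS[activation]
    def neuron(inputs, weights, bias=0.0):
        assert len(inputs) == len(weights) == n_inputs
        s = sum(i * w for i, w in zip(inputs, weights))
        return f(s + (bias if biased else 0.0))
    return neuron

# Enumerate the library: one entry per pre-designed cell type.
library = {
    (n, b, a): make_neuron(n, b, a)
    for n, b, a in itertools.product((2, 4, 6), (True, False), ACTIVATIONS)
}
cell = library[(2, True, "tanh")]
print(cell([0.5, -1.0], [0.8, 0.3], bias=0.1))
```

In the hardware version each cell would additionally carry pipeline registers and a fixed-point interface so that cells snap together; this model only captures the combinational behavior.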

