Neuromorphic Hardware
Recently Published Documents

TOTAL DOCUMENTS: 249 (five years: 145)
H-INDEX: 19 (five years: 6)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 440
Author(s):  
Anup Vanarse ◽  
Adam Osseiran ◽  
Alexander Rassau ◽  
Peter van der Made

Current developments in artificial olfactory systems, also known as electronic nose (e-nose) systems, have benefited from advanced machine learning techniques that have significantly improved the conditioning and processing of multivariate feature-rich sensor data. These advancements are complemented by the application of bioinspired algorithms and architectures based on findings from neurophysiological studies focusing on the biological olfactory pathway. The application of spiking neural networks (SNNs), and of concepts from neuromorphic engineering in general, is one of the key factors that has led to the design and development of efficient bioinspired e-nose systems. However, only a limited number of studies have focused on deploying these models on a natively event-driven hardware platform that exploits the benefits of neuromorphic implementation, such as ultra-low power consumption and real-time processing, for simplified integration in a portable e-nose system. In this paper, we extend our previously reported neuromorphic encoding and classification approach to a real-world dataset that consists of sensor responses from a commercial e-nose system when exposed to eight different types of malts. We show that the proposed SNN-based classifier was able to deliver 97% accurate classification results at a maximum latency of 0.4 ms per inference with a power consumption of less than 1 mW when deployed on neuromorphic hardware. One of the key advantages of the proposed neuromorphic architecture is that the entire functionality, including pre-processing, event encoding, and classification, can be mapped onto the neuromorphic system-on-a-chip (NSoC) to develop power-efficient and highly accurate real-time e-nose systems.
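As a rough illustration of what the event-encoding stage of such a pipeline can look like, the following Python sketch applies send-on-delta encoding to multivariate sensor traces; the threshold, channel count and synthetic data are assumptions for illustration, not the authors' actual method or parameters.

    import numpy as np

    def delta_encode(readings, threshold=0.05):
        """Emit (time_index, channel, polarity) events whenever a channel changes
        by more than `threshold` since its last emitted event (send-on-delta)."""
        events = []
        last = readings[0].copy()
        for t, sample in enumerate(readings[1:], start=1):
            delta = sample - last
            for ch in np.where(np.abs(delta) > threshold)[0]:
                events.append((t, ch, 1 if delta[ch] > 0 else -1))
                last[ch] = sample[ch]
        return events

    # Example: 100 time steps from a synthetic 8-channel sensor array.
    traces = np.cumsum(0.02 * np.random.randn(100, 8), axis=0)
    print(len(delta_encode(traces)), "events generated")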


2021 ◽  
Vol 23 (6) ◽  
pp. 285-294
Author(s):  
N.V. Andreeva ◽  
V.V. Luchinin ◽  
E.A. Ryndin ◽  
M.G. Anchkov ◽  
...  

Memristive neuromorphic chips exploit a prospective class of novel functional materials (memristors) to deploy a new architecture of spiking neural networks for developing basic blocks of brain-like systems. Memristor-based neuromorphic hardware solutions for multi-agent systems are considered frontier challenges in chip design for fast and energy-efficient computing. As functional materials, metal oxide thin films with resistive switching and memory effects (memristive structures) are recognized as a potential elemental base for new components of neuromorphic engineering, enabling a combination of both data storage and processing in a single unit. A key design issue in this case is the hardware-defined functionality of neural networks. The gradual change of the resistive properties of memristive elements and their non-volatile memory behavior make it possible to organize spiking neural networks with unsupervised learning through hardware implementation of basic synaptic mechanisms, such as Hebb's learning rules, including spike-timing-dependent plasticity, long-term potentiation and depression. This paper provides an overview of the research carried out at Saint Petersburg Electrotechnical University "LETI" since 2014 in the field of novel electronic components for neuromorphic hardware solutions in brain-like chip design. Among the most promising concepts developed at ETU "LETI" are: the design of metal-insulator-metal structures exhibiting multilevel resistive switching (gradient tuning of resistive properties and bipolar resistive switching combined in a single memristive element) for further use as artificial synaptic devices in neuromorphic chips; computing schemes for spatio-temporal pattern recognition based on a spiking neural network architecture; and breadboard models of analogue circuits for the hardware implementation of neuromorphic blocks for developing brain-like systems.
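For readers unfamiliar with the synaptic mechanisms mentioned above, the following is a minimal Python sketch of a generic pair-based STDP rule (potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise); the amplitudes and time constants are placeholder values, not fitted memristor characteristics.

    import numpy as np

    def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Weight change as a function of delta_t = t_post - t_pre (in ms)."""
        if delta_t > 0:    # pre before post -> long-term potentiation (LTP)
            return a_plus * np.exp(-delta_t / tau_plus)
        if delta_t < 0:    # post before pre -> long-term depression (LTD)
            return -a_minus * np.exp(delta_t / tau_minus)
        return 0.0

    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")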


2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results which show that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers and even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing rate regime, where they could not simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose here a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware – neuromorphic hardware. Hence, our results can be seen as creating a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.


Author(s):  
Catherine Schuman ◽  
Robert Patton ◽  
Shruti Kulkarni ◽  
Maryam Parsa ◽  
Christopher Stahl ◽  
...  

Abstract Neuromorphic computing offers the opportunity to implement extremely low-power artificial intelligence at the edge. Control applications, such as autonomous vehicles and robotics, are also of great interest for neuromorphic systems at the edge. It is not clear, however, which neuromorphic training approaches are best suited to control applications at the edge. In this work, we implement and compare the performance of evolutionary optimization and imitation learning approaches on an autonomous race car control task using an edge neuromorphic implementation. We show that the evolutionary approaches tend to produce smaller, better-performing networks that are well-suited to edge deployment, but they also take significantly longer to train. We also describe a workflow that enables future algorithmic comparisons on neuromorphic hardware for control applications at the edge.
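To make the comparison concrete, here is a minimal, generic evolutionary-optimization loop of the kind such studies use to tune network parameters against a task reward; the fitness function, parameter vector and population settings below are illustrative placeholders, not the authors' setup.

    import numpy as np

    def evaluate(params):
        """Placeholder fitness: stands in for running an SNN controller on the
        control task (e.g. a race car simulator) and returning its reward."""
        return -np.sum((params - 0.5) ** 2)

    rng = np.random.default_rng(0)
    pop_size, n_params, n_gen, sigma = 32, 64, 50, 0.1
    population = rng.normal(0.0, 1.0, size=(pop_size, n_params))

    for gen in range(n_gen):
        fitness = np.array([evaluate(p) for p in population])
        elite = population[np.argsort(fitness)[-pop_size // 4:]]  # keep the top 25%
        parents = elite[rng.integers(len(elite), size=pop_size)]
        population = parents + sigma * rng.normal(size=(pop_size, n_params))
        population[0] = elite[-1]                                 # elitism: keep the best

    print("best fitness found:", evaluate(population[0]))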


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3237
Author(s):  
Alexander Sboev ◽  
Danila Vlasov ◽  
Roman Rybka ◽  
Yury Davydov ◽  
Alexey Serenko ◽  
...  

The problem of training spiking neural networks (SNNs) is relevant due to the ultra-low power consumption these networks could exhibit when implemented in neuromorphic hardware. The ongoing progress in the fabrication of memristors, a prospective basis for analogue synapses, gives relevance to studying the possibility of SNN learning on the basis of synaptic plasticity models obtained by fitting experimental measurements of the memristor conductance change. The dynamics of memristor conductance is necessarily nonlinear, because conductance changes depend on the timing of spikes, which neurons emit in an all-or-none fashion. The ability to solve classification tasks was previously shown for spiking network models based on the bio-inspired local learning mechanism of spike-timing-dependent plasticity (STDP), as well as with plasticity that models the conductance change of nanocomposite (NC) memristors. In those studies, input data were presented to the network encoded into the intensities of Poisson input spike sequences. This work considers another approach for encoding input data into the input spike sequences presented to the network: temporal encoding, in which an input vector is transformed into the relative timing of individual input spikes. Since temporal encoding uses fewer input spikes, the processing of each input vector by the network can be faster and more energy-efficient. The aim of the current work is to show the applicability of temporal encoding to training spiking networks with three synaptic plasticity models: STDP, the NC memristor approximation, and the PPX memristor approximation. We assess the accuracy of the proposed approach on several benchmark classification tasks: Fisher’s Iris, Wisconsin breast cancer, and the pole balancing task (CartPole). The accuracies achieved by SNNs with memristor plasticity and conventional STDP are comparable and are on par with classic machine learning approaches.
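A minimal Python sketch of the temporal (latency) encoding described above: each component of a normalized input vector is mapped to the relative timing of a single input spike, so larger values fire earlier. The time window and the linear mapping are assumptions for illustration.

    import numpy as np

    def latency_encode(x, t_max=100.0):
        """Map a vector with components in [0, 1] to one spike time per input
        neuron: a value of 1.0 fires at t = 0, a value of 0.0 fires at t = t_max."""
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return (1.0 - x) * t_max

    # Example: one normalised 4-dimensional feature vector (e.g. an Iris sample).
    print(latency_encode([0.8, 0.1, 0.55, 0.95]))  # approx. [20. 90. 45. 5.]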


2021 ◽  
Vol 15 ◽  
Author(s):  
Youngeun Kim ◽  
Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning owing to sparse, asynchronous and binary event (or spike) driven processing, which can yield huge energy efficiency benefits on neuromorphic hardware. However, SNNs convey temporally varying spike activation through time, which is likely to induce a large variation of forward activations and backward gradients, resulting in unstable training. To address this training issue in SNNs, we revisit Batch Normalization (BN) and propose a temporal Batch Normalization Through Time (BNTT) technique. In contrast to previous BN techniques for SNNs, we find that varying the BN parameters at every time-step allows the model to learn the time-varying input distribution better. Specifically, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes. We demonstrate BNTT on the CIFAR-10, CIFAR-100, Tiny-ImageNet, event-driven DVS-CIFAR10, and Sequential MNIST datasets and show near state-of-the-art performance. We conduct a comprehensive analysis of the temporal characteristics of BNTT and showcase interesting benefits toward robustness against random and adversarial noise. Further, by monitoring the learnt parameters of BNTT, we find that we can perform temporal early exit; that is, we can reduce the inference latency by ~5-20 time-steps relative to the original training latency. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/BNTT-Batch-Normalization-Through-Time.
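The core idea of BNTT, as described in the abstract, is to keep separate normalization statistics and learnable scale parameters for every time-step rather than sharing one set across the whole spike sequence. The numpy sketch below illustrates that idea only; shapes and parameters are illustrative, and the authors' released PyTorch implementation is at the URL above.

    import numpy as np

    T, batch, features = 25, 32, 128          # time-steps, batch size, layer width
    gamma = np.ones((T, features))            # a separate scale parameter per time-step
    eps = 1e-5

    def bntt(x):
        """x: spike-driven pre-activations with shape (T, batch, features)."""
        out = np.empty_like(x)
        for t in range(T):                    # statistics are NOT shared across time
            mu = x[t].mean(axis=0)
            var = x[t].var(axis=0)
            out[t] = gamma[t] * (x[t] - mu) / np.sqrt(var + eps)
        return out

    x = np.random.rand(T, batch, features)    # stand-in for a layer's inputs over time
    y = bntt(x)
    print(y.shape, round(float(y[0].mean()), 3), round(float(y[0].std()), 3))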


2021 ◽  
Vol 15 ◽  
Author(s):  
Nicholas LeBow ◽  
Bodo Rueckauer ◽  
Pengfei Sun ◽  
Meritxell Rovira ◽  
Cecilia Jiménez-Jorquera ◽  
...  

Liquid analysis is key to tracking conformity with the strict process quality standards of sectors like food, beverage, and chemical manufacturing. In order to analyse product qualities online and at the very point of interest, automated monitoring systems must satisfy strong requirements in terms of miniaturization, energy autonomy, and real-time operation. Toward this goal, we present the first implementation of artificial taste running on neuromorphic hardware for continuous edge monitoring applications. We used a solid-state electrochemical microsensor array to acquire multivariate, time-varying chemical measurements, employed temporal filtering to enhance sensor readout dynamics, and deployed a rate-based, deep convolutional spiking neural network to efficiently fuse the electrochemical sensor data. To evaluate performance, we created MicroBeTa (Microsensor Beverage Tasting), a new dataset for beverage classification incorporating 7 h of temporal recordings performed over 3 days, including sensor drifts and sensor replacements. Our implementation of artificial taste is 15× more energy-efficient on inference tasks than similar convolutional architectures running on other commercial low-power edge-AI inference devices, achieves latencies more than 178× shorter than the sampling period of the sensor readout, and reaches high accuracy (97%) on a single Intel Loihi neuromorphic research processor in a USB stick form factor.
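As a hypothetical illustration of the two front-end steps mentioned in the abstract (temporal filtering of the sensor readout followed by rate-based spike encoding), the sketch below uses a first-order high-pass filter and a Bernoulli approximation of Poisson spiking; the filter constant, rate scaling and synthetic data are assumptions, not the authors' parameters.

    import numpy as np

    def highpass(signal, alpha=0.95):
        """First-order high-pass filter applied independently to each channel."""
        out = np.zeros_like(signal)
        for t in range(1, len(signal)):
            out[t] = alpha * (out[t - 1] + signal[t] - signal[t - 1])
        return out

    def rate_encode(values, max_rate=200.0, dt=1e-3, seed=0):
        """Bernoulli approximation of Poisson spiking, one channel per value."""
        rng = np.random.default_rng(seed)
        rates = max_rate * np.clip(np.abs(values), 0.0, 1.0)
        return rng.random(np.shape(values)) < rates * dt

    readings = np.cumsum(0.01 * np.random.randn(500, 16), axis=0)  # 16 synthetic channels
    filtered = highpass(readings)
    spikes = rate_encode(filtered[-1])      # spikes for one time step of filtered data
    print(spikes.astype(int))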


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Julian Büchel ◽  
Dmitrii Zendrikov ◽  
Sergio Solinas ◽  
Giacomo Indiveri ◽  
Dylan R. Muir

Abstract Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as “neuromorphic engineering”. However, analog circuits are sensitive to process-induced variation among transistors in a chip (“device mismatch”). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
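To give a feel for the device-mismatch problem described above, the toy Python sketch below models a nominally identical neuron parameter (a time constant) as varying log-normally from neuron to neuron and chip to chip; the spread and values are assumed for illustration only.

    import numpy as np

    rng = np.random.default_rng(42)
    configured_tau = 20e-3            # the programmed membrane time constant (s)
    mismatch_sigma = 0.2              # roughly 20% parameter spread (assumed)

    def fabricate_chip(n_neurons=256):
        """Per-neuron time constants as they might turn out on one physical device."""
        return configured_tau * rng.lognormal(mean=0.0, sigma=mismatch_sigma, size=n_neurons)

    chip_a, chip_b = fabricate_chip(), fabricate_chip()
    print("chip A tau range (ms):", 1e3 * chip_a.min(), 1e3 * chip_a.max())
    print("chip B tau range (ms):", 1e3 * chip_b.min(), 1e3 * chip_b.max())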


2021 ◽  
Vol 15 ◽  
Author(s):  
Margot Wagner ◽  
Thomas M. Bartol ◽  
Terrence J. Sejnowski ◽  
Gert Cauwenberghs

Progress in computational neuroscience toward understanding brain function is challenged both by the complexity of molecular-scale electrochemical interactions at the level of individual neurons and synapses, and by the dimensionality of network dynamics across the brain, which covers a vast range of spatial and temporal scales. Our work abstracts an existing highly detailed, biophysically realistic 3D reaction-diffusion model of a chemical synapse into a compact internal state-space representation that maps onto parallel neuromorphic hardware for efficient emulation at very large scale. The abstraction offers near-equivalence in input-output dynamics while preserving biologically interpretable, tunable parameters.


Author(s):  
Anna-Maria Jürgensen ◽  
Afshin Khalili ◽  
Elisabetta Chicca ◽  
Giacomo Indiveri ◽  
Martin Paul Nawrot

Abstract Animal nervous systems are highly efficient in processing sensory input. The neuromorphic computing paradigm aims at the hardware implementation of neural network computations to support novel solutions for building brain-inspired computing systems. Here, we take inspiration from sensory processing in the nervous system of the fruit fly larva. With its strongly limited computational resources of fewer than 200 neurons and fewer than 1,000 synapses, the larval olfactory pathway employs fundamental computations to transform broadly tuned receptor input at the periphery into an energy-efficient sparse code in the central brain. We show how this approach allows us to achieve sparse coding and increased separability of stimulus patterns in a spiking neural network, validated with both software simulation and hardware emulation on mixed-signal real-time neuromorphic hardware. We verify that feedback inhibition is the central motif supporting sparseness in the spatial domain, across the neuron population, while the combination of spike-frequency adaptation and feedback inhibition determines sparseness in the temporal domain. Our experiments demonstrate that such small, biologically realistic neural networks, efficiently implemented on neuromorphic hardware, can achieve parallel processing and efficient encoding of sensory input at full temporal resolution.
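The two mechanisms highlighted above can be illustrated with a small leaky integrate-and-fire simulation in which a shared feedback-inhibition term sparsifies activity across the population and a per-neuron adaptation current sparsifies it over time; all parameters below are assumptions for illustration, not the larval-circuit model itself.

    import numpy as np

    rng = np.random.default_rng(1)
    n, steps, dt = 50, 500, 1e-3               # neurons, time-steps, step size (s)
    tau_v, tau_a, tau_i = 20e-3, 200e-3, 30e-3 # membrane, adaptation, inhibition
    v = np.zeros(n)                            # membrane potentials
    a = np.zeros(n)                            # adaptation currents
    inh = 0.0                                  # shared feedback inhibition
    drive = rng.uniform(0.8, 1.6, n)           # broadly tuned receptor-like input
    spike_count = np.zeros(n)

    for _ in range(steps):
        v += dt / tau_v * (-v + drive - a - inh)
        fired = v > 1.0
        v[fired] = 0.0                                     # reset after a spike
        a += dt / tau_a * (-a) + 0.3 * fired               # spike-frequency adaptation
        inh += dt / tau_i * (-inh) + 0.05 * fired.sum()    # feedback inhibition
        spike_count += fired

    print("active neurons:", int((spike_count > 0).sum()), "of", n)
    print("mean firing rate (Hz):", spike_count.mean() / (steps * dt))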

