Demonstrating Advanced Machine Learning and Neuromorphic Computing Using IBM’s NS16e

Author(s):  
Mark Barnell ◽  
Courtney Raymond ◽  
Matthew Wilson ◽  
Darrek Isereau ◽  
Eric Cote ◽  
...  



Author(s):  
Pranava Bhat

Engineering has always drawn inspiration from the biological world. Understanding how the human brain works has long been a key area of interest and has driven many advances in computing systems. The brain's computational capability per unit power and per unit volume exceeds that of today's best supercomputers, so mimicking the physics of computation used by the brain and nervous system could bring a paradigm shift in computing. Bridging computing and neural systems in this way is termed neuromorphic computing, and it is driving fundamental changes in computing hardware. Neuromorphic computing systems have progressed rapidly over the past decades, with many organizations introducing a variety of designs, implementation methodologies, and prototype chips. This paper discusses the parameters considered in advanced neuromorphic computing systems and the tradeoffs between them, and reviews attempts to build computational models of neurons. Advances in hardware implementation are fuelling applications in machine learning, and this paper presents the applications of these modern computing systems in machine learning.
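As a concrete illustration of the kind of neuron model the survey refers to, the following is a minimal sketch of a leaky integrate-and-fire neuron in Python. The parameters, units, and drive signal are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current : 1-D array of input current samples (arbitrary units)
    Returns the membrane-potential trace and the indices of emitted spikes.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after spiking
        trace.append(v)
    return np.array(trace), spikes

# Constant drive above threshold produces a regular spike train.
trace, spikes = lif_neuron(np.full(200, 1.5))
print(f"{len(spikes)} spikes in 200 steps")
```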



2021 ◽  
Vol 17 (4) ◽  
pp. 1-27
Author(s):  
Shihao Song ◽  
Jui Hanamshet ◽  
Adarsha Balaji ◽  
Anup Das ◽  
Jeffrey L. Krichmar ◽  
...  

Neuromorphic computing systems execute machine learning tasks designed with spiking neural networks. These systems are embracing non-volatile memory to implement high-density and low-energy synaptic storage. Elevated voltages and currents needed to operate non-volatile memories cause aging of CMOS-based transistors in each neuron and synapse circuit in the hardware, drifting the transistor’s parameters from their nominal values. If these circuits are used continuously for too long, the parameter drifts cannot be reversed, resulting in permanent degradation of circuit performance over time, eventually leading to hardware faults. Aggressive device scaling increases power density and temperature, which further accelerates the aging, challenging the reliable operation of neuromorphic systems. Existing reliability-oriented techniques periodically de-stress all neuron and synapse circuits in the hardware at fixed intervals, assuming worst-case operating conditions, without actually tracking their aging at run-time. To de-stress these circuits, normal operation must be interrupted, which introduces latency in spike generation and propagation, impacting the inter-spike interval and hence performance (e.g., accuracy). We observe that in contrast to long-term aging, which permanently damages the hardware, short-term aging in scaled CMOS transistors is mostly due to bias temperature instability. The latter is heavily workload-dependent and, more importantly, partially reversible. We propose a new architectural technique to mitigate the aging-related reliability problems in neuromorphic systems by designing an intelligent run-time manager (NCRTM), which dynamically de-stresses neuron and synapse circuits in response to the short-term aging in their CMOS transistors during the execution of machine learning workloads, with the objective of meeting a reliability target. NCRTM de-stresses these circuits only when it is absolutely necessary to do so, otherwise reducing the performance impact by scheduling de-stress operations off the critical path. We evaluate NCRTM with state-of-the-art machine learning workloads on neuromorphic hardware. Our results demonstrate that NCRTM significantly improves the reliability of neuromorphic hardware, with marginal impact on performance.
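The abstract describes NCRTM only at a high level. The toy Python sketch below illustrates the general idea of a run-time manager that tracks workload-dependent stress per circuit and de-stresses circuits either when a reliability limit is reached or opportunistically when they are off the critical path. The class name, thresholds, and the linear stress/recovery model are hypothetical assumptions, not the authors' implementation:

```python
# Toy sketch of a run-time reliability manager in the spirit of NCRTM
# (hypothetical thresholds and recovery model; not the authors' implementation).

class RuntimeReliabilityManager:
    def __init__(self, n_circuits, stress_limit=1.0, recovery_rate=0.5):
        self.stress = [0.0] * n_circuits   # accumulated BTI stress per circuit
        self.stress_limit = stress_limit   # reliability target (assumed units)
        self.recovery_rate = recovery_rate # fraction of stress removed per de-stress

    def record_activity(self, circuit, activity):
        # Short-term BTI aging is workload-dependent: more spiking activity,
        # more accumulated stress (simplified linear model).
        self.stress[circuit] += activity

    def step(self, idle_circuits):
        """Decide which circuits to de-stress this scheduling interval."""
        destress = set()
        for c, s in enumerate(self.stress):
            if s >= self.stress_limit:
                destress.add(c)            # mandatory: reliability target at risk
            elif c in idle_circuits and s > 0:
                destress.add(c)            # opportunistic: off the critical path
        for c in destress:
            # BTI-induced drift is only partially reversible, so de-stressing
            # recovers just a fraction of the accumulated stress.
            self.stress[c] *= (1.0 - self.recovery_rate)
        return destress

mgr = RuntimeReliabilityManager(n_circuits=4)
mgr.record_activity(0, 1.2)              # heavily used neuron circuit
mgr.record_activity(1, 0.3)
print(mgr.step(idle_circuits={1, 2}))    # {0, 1}: 0 mandatory, 1 opportunistic
```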





Author(s):  
Mark A. Hughes ◽  
Mike J. Shipston ◽  
Alan F. Murray

Electronic signals govern the function of both nervous systems and computers, albeit in different ways. As such, hybridizing both systems to create an iono-electric brain–computer interface is a realistic goal; and one that promises exciting advances in both heterotic computing and neuroprosthetics capable of circumventing devastating neuropathology. ‘Neural networks’ were, in the 1980s, viewed naively as a potential panacea for all computational problems that did not fit well with conventional computing. The field bifurcated during the 1990s into a highly successful and much more realistic machine learning community and an equally pragmatic, biologically oriented ‘neuromorphic computing’ community. Algorithms found in nature that use the non-synchronous, spiking nature of neuronal signals have been found to be (i) implementable efficiently in silicon and (ii) computationally useful. As a result, interest has grown in techniques that could create mixed ‘siliconeural’ computers. Here, we discuss potential approaches and focus on one particular platform using parylene-patterned silicon dioxide.
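One example of a nature-inspired spiking algorithm of the kind referred to above is spike-timing-dependent plasticity (STDP), a learning rule that adjusts a synapse based on the relative timing of pre- and postsynaptic spikes. The Python sketch below shows a standard pair-based STDP update; the constants are illustrative assumptions rather than values from the paper:

```python
import math

def stdp_delta(t_pre, t_post, a_plus=0.01, a_minus=0.012,
               tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair (times in ms).

    Pre-before-post potentiates the synapse; post-before-pre depresses it.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # causal pair -> strengthen
    return -a_minus * math.exp(dt / tau_minus)      # anti-causal pair -> weaken

w = 0.5
w += stdp_delta(t_pre=10.0, t_post=15.0)   # pre fires 5 ms before post
w += stdp_delta(t_pre=30.0, t_post=22.0)   # post fires 8 ms before pre
print(round(w, 4))
```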



2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.
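For readers unfamiliar with the analogy, the toy Python snippet below illustrates what data augmentation does in machine learning: it generates perturbed variants of the same example so that a learner must abstract away incidental details. The jitter model and values here are purely illustrative:

```python
import random

def augment(sample, n_variants=5, jitter=0.1):
    """Generate perturbed copies of a feature vector (toy data augmentation)."""
    variants = []
    for _ in range(n_variants):
        variants.append([x + random.uniform(-jitter, jitter) for x in sample])
    return variants

# Exposing a learner to many perturbed variants of the same underlying example
# encourages it to abstract away the incidental details.
for v in augment([0.2, 0.8, 0.5]):
    print([round(x, 2) for x in v])
```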



2020 ◽  
Author(s):  
Man-Wai Mak ◽  
Jen-Tzung Chien


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr
Keyword(s):  


2020 ◽  
Author(s):  
Marc Peter Deisenroth ◽  
A. Aldo Faisal ◽  
Cheng Soon Ong
Keyword(s):  

