In-Memory Computing with Resistive Memory Circuits: Status and Outlook

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1063
Author(s):  
Giacomo Pedretti ◽  
Daniele Ielmini

In-memory computing (IMC) refers to non-von Neumann architectures where data are processed in situ within the memory by taking advantage of physical laws. Among the memory devices considered for IMC, the resistive switching memory (RRAM), also known as the memristor, is one of the most promising technologies thanks to its relatively easy integration and scaling. RRAM devices have been explored for both memory and IMC applications, such as neural network accelerators and neuromorphic processors. This work presents the status of and outlook for RRAM in analog computing, where the precision of the encoded coefficients, such as the synaptic weights of a neural network, is one of the key requirements. We present an experimental study of the cycle-to-cycle variation of the set and reset processes in HfO2-based RRAM, which indicates that gate-controlled pulses produce the least variation in conductance. Assuming a constant conductance variation σG, we then evaluate and compare various mapping schemes, including multilevel, binary, unary, redundant and slicing techniques. We present analytical formulas for the standard deviation of the conductance and for the maximum number of bits that still satisfies a given maximum error. Finally, we discuss RRAM performance on various analog computing tasks in comparison with other computational memory devices. RRAM emerges as one of the most promising devices in terms of scaling, accuracy and low-current operation.
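To make concrete why the choice of mapping scheme matters, the following sketch (a simplification of our own, not the paper's exact formulas) compares the normalized weight-level error of a multilevel mapping (one device per weight) with a bit-sliced mapping (one binary device per bit), assuming each device has independent conductance noise σG and weights are normalized to full-scale conductance:

```python
import math

def multilevel_weight_std(sigma_g: float) -> float:
    # One device stores the whole weight, so the normalized weight noise
    # equals the normalized device noise.
    return sigma_g

def sliced_weight_std(sigma_g: float, n_bits: int) -> float:
    # Bit slicing: w = (sum_k b_k * 2^k) / (2^n_bits - 1). Independent
    # per-device errors add in quadrature, weighted by slice significance,
    # then the sum is rescaled back to [0, 1].
    var = sigma_g ** 2 * sum(4 ** k for k in range(n_bits))
    return math.sqrt(var) / (2 ** n_bits - 1)

print(multilevel_weight_std(0.01))               # 0.01
print(round(sliced_weight_std(0.01, 4), 5))      # 0.00615
```

Under this simplified noise model, slicing spreads the error across several less-significant devices, so the normalized weight error shrinks as the number of slices grows.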

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1526 ◽  
Author(s):  
Choongmin Kim ◽  
Jacob A. Abraham ◽  
Woochul Kang ◽  
Jaeyong Chung

Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems; it is also referred to as processing-in-memory or in-situ analog computing. Crossbars have a fixed number of synapses per neuron, so neurons must be decomposed to map networks onto the crossbars. This paper proposes a k-spare decomposition algorithm that can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network so that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons.
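The global step can be illustrated with a minimal sketch (our own, hypothetical simplification; the function name and even-spreading policy are assumptions, not the paper's algorithm): cap each crossbar's usable slots at its size minus k, then spread a layer's neurons evenly across the resulting crossbars:

```python
import math

def global_decompose(n_neurons: int, crossbar_size: int, k_spare: int) -> list:
    # Map at most crossbar_size - k_spare network neurons per crossbar,
    # reserving k_spare slots on each crossbar for the later local
    # decomposition step to use for accuracy recovery.
    capacity = crossbar_size - k_spare
    if capacity <= 0:
        raise ValueError("k_spare must be smaller than the crossbar size")
    n_crossbars = math.ceil(n_neurons / capacity)
    # Spread neurons as evenly as possible across the crossbars.
    base, extra = divmod(n_neurons, n_crossbars)
    return [base + (1 if i < extra else 0) for i in range(n_crossbars)]

print(global_decompose(1000, 128, 8))  # 9 crossbars, each with 8 spares free
```

Raising k trades neuron usage (more crossbars) for headroom to recover accuracy, which is the tunable trade-off the abstract describes.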


Micromachines ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1183
Author(s):  
Siqiu Xu ◽  
Xi Li ◽  
Chenchen Xie ◽  
Houpeng Chen ◽  
Cheng Chen ◽  
...  

Computing-in-memory (CIM), based on non-von Neumann architecture, has lately received significant attention due to its lower delay overhead and higher energy efficiency in convolutional and fully-connected neural network computing. A growing body of work has prioritized the memory array and peripheral circuits that perform the multiply-and-accumulate (MAC) operation, but not enough attention has been paid to high-precision hardware implementations of the non-linear layers, which still incur time overhead and power consumption. Sigmoid is a widely used non-linear activation function, and most studies provide only an approximation of its expression rather than an exact match, inevitably leading to considerable error. To address this issue, we propose a high-precision circuit implementation of the sigmoid that matches the expression exactly for the first time. Simulation results with the SMIC 40 nm process suggest that the proposed circuit faithfully reproduces the ideal sigmoid, with a maximum error of 2.74% and an average error of 0.21% between the simulated and ideal curves. In addition, a multi-layer convolutional neural network based on the CIM architecture and employing the simulated high-precision sigmoid activation function achieves recognition accuracy on a handwritten-digit test database similar to that of the ideal sigmoid in software, reaching 97.06% with online training and 97.74% with offline training.
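The two figures of merit quoted above, maximum and average error against the ideal sigmoid, are straightforward to compute; a minimal sketch (the sampling grid and the 1% toy perturbation are our own, not the paper's simulation data):

```python
import math

def sigmoid(x: float) -> float:
    # Ideal sigmoid activation: 1 / (1 + e^-x).
    return 1.0 / (1.0 + math.exp(-x))

def error_metrics(xs, circuit_outputs):
    # Maximum and average absolute deviation of a simulated/measured curve
    # from the ideal sigmoid, evaluated pointwise over the sample grid.
    errs = [abs(y - sigmoid(x)) for x, y in zip(xs, circuit_outputs)]
    return max(errs), sum(errs) / len(errs)

# Toy check: a curve offset from the ideal sigmoid by a constant 1%.
xs = [i / 10.0 for i in range(-50, 51)]
ys = [sigmoid(x) + 0.01 for x in xs]
max_err, avg_err = error_metrics(xs, ys)
print(max_err, avg_err)  # both 0.01, within floating-point rounding
```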


Author(s):  
Sandip Tiwari

Information is physical, so its manipulation through devices is subject to its own mechanics: the science and engineering of behavioral description, which is intermingled with classical, quantum and statistical mechanics principles. This chapter unifies these principles and physical laws with their implications for the nanoscale. It covers state machines, the Church-Turing thesis and its embodiment in various state machines, probabilities, Bayesian principles, and entropy in its various forms (Shannon, Boltzmann, von Neumann, algorithmic), with an eye on the principle of maximum entropy as an information manipulation tool. Notions of conservation and non-conservation are applied to example circuit forms, folding in adiabatic, isothermal, reversible and irreversible processes. This brings out the implications of fluctuations and transitions, the interplay of errors and stability, and the energy cost of determinism. The chapter concludes by discussing networks as tools to understand information flow and decision making, with an introduction to entanglement in quantum computing.
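Of the entropy forms the chapter lists, Shannon's is the easiest to make concrete; a minimal sketch of the standard definition H(p) = -Σ p_i log2 p_i:

```python
import math

def shannon_entropy(probs) -> float:
    # Shannon entropy in bits; zero-probability outcomes contribute nothing
    # (the limit p*log p -> 0 as p -> 0).
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))            # 1.0 bit: a fair coin
print(shannon_entropy([1.0]))                 # 0.0 bits: a certain outcome
print(round(shannon_entropy([0.25] * 4), 1))  # 2.0 bits: two fair coins
```

The maximum-entropy principle the chapter highlights picks, among all distributions consistent with known constraints, the one maximizing exactly this quantity.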


2018 ◽  
Vol 8 (4) ◽  
pp. 34 ◽  
Author(s):  
Vishal Saxena ◽  
Xinyu Wu ◽  
Ira Srivastava ◽  
Kehan Zhu

The ongoing revolution in deep learning is redefining the nature of computing, driven by a growing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still predominates, thanks to the flexibility of software implementations and the maturity of algorithms. However, it is increasingly desirable for cognitive computing to occur at the edge, i.e., on energy-constrained hand-held devices, which is energy-prohibitive with digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer too low a neurosynaptic density for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with complementary metal oxide semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that will transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid neuromorphic system-on-a-chip (NeuSoC) architectures promise machine learning capability in a chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstrations of such architectures have been limited, because the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realize large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with 'brain-like' energy efficiency.


Author(s):  
Gerardo Schneider ◽  
Alejandro Javier Hadad ◽  
Alejandra Kemerer

Abstract: In this paper we present a software implementation for determining the state of sugarcane plantations based on the analysis of multispectral aerial images. At present there are no precise techniques for objectively estimating the area of fallen or lodged cane, which causes significant productivity losses at harvest and in industrialization. For this work, a reference image dataset was assembled, and software was implemented that derives indicators proposed as representative of the agronomic phenomenon; analyses of the generated data were then carried out. In addition, a reference classifier based on neural networks was implemented, with which the strength of these indicators was assessed and the affected area was estimated quantitatively and spatially. Keywords: sugarcane, quantification, lodging, neural network, image processing


2021 ◽  
pp. 1-9
Author(s):  
Yibin Deng ◽  
Xiaogang Yang ◽  
Shidong Fan ◽  
Hao Jin ◽  
Tao Su ◽  
...  

Because special ships have long propulsion shafting, the number of bearings is large while the amount of measured bearing-reaction data is small, which makes shafting installation difficult. To apply a small amount of measured data to the installation process and accurately calculate the displacement values during actual installation, this article proposes a method for calculating the displacement values of the shafting intermediate bearings based on training samples with different confidence levels. Taking a ro-ro ship as the research object, this study simulates the actual installation process, assigns a higher confidence level to the small amount of measured data, constructs a new training sample set for machine learning, and finally obtains a genetic algorithm-backpropagation (GABP) neural network that reflects the actual installation process. The study also compares the accuracy of the shafting neural network trained on samples with different confidence levels against a shafting neural network trained without measured data, and the results show that the former is more accurate. Moreover, as the number of adjustments and of measured data points increases, the network accuracy improves significantly; after adding four measured data points, the maximum error is within 1%, which can guide ship propulsion shafting alignment.

Introduction: With the rapid development of science and technology in the world, special ships such as engineering ships, official ships, and warships play an important role (Carrasco et al. 2020; Prill et al. 2020). Some ships of this special type are limited by factors such as the stern line of the engine room, hull stability, and operational requirements. They usually adopt a middle or forward engine-room layout, which gives the propulsion system a longer shaft, with more than two intermediate shafts and intermediate bearings. This forms a so-called multi-support shafting (Lee et al. 2019), which increases the difficulty of shafting alignment because of the force coupling between the bearings (Lai et al. 2018a, 2018b). Existing methods for calculating the displacement values are complex and, because of installation errors and other factors, the bearing heights must be adjusted several times to make the bearing reactions meet the specification requirements (Kim et al. 2017; Ko et al. 2017). Thus, predicting accurate displacement values for each intermediate bearing is the key to solving the problem of installing and aligning the intermediate bearings of multi-support shafting (Zhou et al. 2005; Xiao-fei et al. 2017).
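The confidence-level idea, giving the few measured samples more pull during training than the bulk of simulated ones, can be sketched as sample weighting (a hypothetical illustration of our own; the function name, weight value, and data layout are assumptions, not the article's implementation):

```python
def build_training_set(simulated, measured, measured_weight=5.0):
    # Simulated (features, target) pairs carry the baseline weight 1.0;
    # the few measured bearing-reaction samples carry a larger sample
    # weight, so a GABP-style network fitted with weighted error is pulled
    # toward the measured installation data.
    return ([(x, y, 1.0) for x, y in simulated]
            + [(x, y, measured_weight) for x, y in measured])

samples = build_training_set(
    simulated=[([1.0], 0.20), ([2.0], 0.40)],  # toy simulated data
    measured=[([1.5], 0.35)],                  # one toy measured point
)
print(len(samples))     # 3
print(samples[-1][2])   # 5.0: the measured sample carries more weight
```

A weighted loss then multiplies each sample's squared error by its third tuple element, which is one common way to encode differing confidence in training data.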


2021 ◽  
pp. 2150168
Author(s):  
Hasan Özdoğan ◽  
Yiğit Ali Üncü ◽  
Mert Şekerci ◽  
Abdullah Kaplan

In this paper, calculations of the [Formula: see text] reaction cross-sections at 14.5 MeV are presented using artificial neural network (ANN) algorithms. The systematics are based on accounting for the non-equilibrium reaction mechanism and the corresponding analytical formulas of the pre-equilibrium exciton model. Experimental results obtained from the EXFOR database were used to train a feed-forward ANN with the Levenberg–Marquardt (LM) algorithm, considered one of the best-known and most effective training methods for neural networks. The regression [Formula: see text] values for the ANN estimation were 0.9998, 0.9927 and 0.9895 for training, testing, and the overall process, respectively. The [Formula: see text] reaction cross-sections were also reproduced with the TALYS 1.95 and EMPIRE 3.2 codes. In summary, it is demonstrated that ANN algorithms can be used to calculate the [Formula: see text] reaction cross-section with the semi-empirical systematics.
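The regression values quoted above are coefficients of determination; a minimal sketch of the standard computation (not the paper's code) on the fitted versus experimental cross-sections:

```python
def r_squared(y_true, y_pred) -> float:
    # Coefficient of determination: R^2 = 1 - SS_res / SS_tot, where
    # SS_res is the residual sum of squares of the fit and SS_tot is the
    # total sum of squares about the mean of the true values.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# A perfect fit gives R^2 = 1; a fit no better than the mean gives R^2 = 0.
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
print(r_squared([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0
```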


AIP Advances ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 025111 ◽  
Author(s):  
Divya Kaushik ◽  
Utkarsh Singh ◽  
Upasana Sahu ◽  
Indu Sreedevi ◽  
Debanjan Bhowmik

2020 ◽  
Vol 458 ◽  
pp. 124674 ◽  
Author(s):  
Yi Zhou ◽  
Rui Chen ◽  
Wenjie Chen ◽  
Rui-Pin Chen ◽  
Yungui Ma
