A Heterogeneous Stochastic Computing Multiplier for Universally Accurate and Energy-Efficient DNNs

Author(s): Jihe Wang, Hao Chen, Danghui Wang, Kuizhi Mei, Shengbing Zhang, ...

Author(s): Danghui Wang, Zhaoqing Wang, Linfan Yu, Ying Wu, Jiaqi Yang, ...

2022, Vol 18 (2), pp. 1-25
Author(s): Saransh Gupta, Mohsen Imani, Joonseop Sim, Andrew Huang, Fan Wu, ...

Stochastic computing (SC) reduces the complexity of computation by representing numbers with long streams of independent bits. However, increasing performance in SC comes with either an increase in area or a loss in accuracy. Processing in memory (PIM) computes data in-place while having high memory density and supporting bit-parallel operations with low energy consumption. In this article, we propose COSMO, an architecture for computing with stochastic numbers in memory, which enables SC in memory. The proposed architecture is general and can be used for a wide range of applications. It is a highly dense and parallel architecture that supports most SC encodings and operations in memory. It maximizes the performance and energy efficiency of SC by introducing several innovations: (i) in-memory parallel stochastic number generation, (ii) efficient implication-based logic in memory, (iii) novel memory bit-line segmenting, (iv) a new memory-compatible SC addition operation, and (v) flexible block allocation. To show the generality and efficiency of our stochastic architecture, we implement image processing, deep neural networks (DNNs), and hyperdimensional (HD) computing on the proposed hardware. Our evaluations show that running DNN inference on COSMO is 141× faster and 80× more energy efficient than on a GPU.
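The core idea behind SC's low-complexity arithmetic can be illustrated in software: in unipolar encoding, a value p in [0, 1] becomes a bitstream whose bits are 1 with probability p, and multiplication reduces to a bitwise AND of two independent streams. The sketch below (function names are illustrative, not from COSMO's implementation) demonstrates this:

```python
import random

def to_stream(p, n, rng):
    """Encode p in [0, 1] as a unipolar stochastic bitstream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=10000, seed=0):
    """Multiply two unipolar values by ANDing independent bitstreams.

    For independent streams, P(both bits are 1) = a * b, so the density
    of 1s in the AND-ed stream estimates the product.
    """
    rng = random.Random(seed)
    sa = to_stream(a, n, rng)
    sb = to_stream(b, n, rng)
    prod = [x & y for x, y in zip(sa, sb)]  # one AND gate per bit in hardware
    return sum(prod) / n

est = sc_multiply(0.5, 0.8)  # estimate close to 0.4
```

The estimate's standard error shrinks as 1/√n, which is the accuracy-versus-stream-length trade-off the abstract refers to: longer streams cost more cycles (or, when parallelized, more area) but yield tighter estimates.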


2022, Vol 15
Author(s): Vivek Parmar, Bogdan Penkovsky, Damien Querlioz, Manan Suri

With recent advances in the field of artificial intelligence (AI) such as binarized neural networks (BNNs), a wide variety of vision applications with energy-optimized implementations have become possible at the edge. Such networks implement the first layer at high precision, which poses a challenge for deploying a uniform hardware mapping across the network. Stochastic computing can convert these high-precision computations into a sequence of binarized operations while maintaining equivalent accuracy. In this work, we propose a fully binarized, hardware-friendly computation engine based on stochastic computing as a proof of concept for vision applications involving multi-channel inputs. Stochastic sampling is performed by drawing from a non-uniform (normal) distribution generated by analog hardware sources. We first validate the benefits of the proposed pipeline on the CIFAR-10 dataset. To further demonstrate its application in real-world scenarios, we present a case study of microscopy image diagnostics for pathogen detection. We then evaluate the benefits of implementing such a pipeline using OxRAM-based circuits for stochastic sampling as well as in-memory computing-based binarized multiplication. The proposed implementation is about 1,000 times more energy efficient than conventional floating-precision digital implementations, with memory savings of a factor of 45.
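The stochastic-sampling step described above can be sketched in a few lines: a high-precision input x is compared against Gaussian noise samples, producing a ±1 bitstream whose average converges to 2Φ(x/σ) − 1, a smooth monotone encoding of x. Each first-layer multiply then reduces to a sign flip per bit. This is a minimal software sketch under those assumptions (names are illustrative; the paper's engine uses analog OxRAM noise sources rather than a software RNG):

```python
import math
import random

def stochastic_binarize(x, n, sigma=1.0, seed=0):
    """Emit n binary (+1/-1) samples by thresholding x against Gaussian noise.

    The stream's average converges to 2*Phi(x/sigma) - 1 (an erf-shaped,
    monotone encoding of x), so a high-precision input becomes a sequence
    of single-bit values usable in binarized multiply-accumulate.
    """
    rng = random.Random(seed)
    return [1 if x > rng.gauss(0.0, sigma) else -1 for _ in range(n)]

def binarized_multiply(bits, w_sign):
    """Multiply a ±1 bitstream by a binarized weight: just a sign flip."""
    return [b * w_sign for b in bits]

bits = stochastic_binarize(0.5, 20000)
mean = sum(bits) / len(bits)  # approximately erf(0.5 / sqrt(2))
```

The point of the sketch is that after this conversion, no operation touches a high-precision multiplier: every arithmetic step is a single-bit comparison or sign flip, matching the uniform binarized mapping the abstract targets.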


2017, Vol 7 (4), pp. 29
Author(s): Ramu Seva, Prashanthi Metku, Minsu Choi

2020, Vol 13 (3)
Author(s): Matthew W. Daniels, Advait Madhavan, Philippe Talatchian, Alice Mizrahi, Mark D. Stiles
