Ultra-low power Hf0.5Zr0.5O2 based ferroelectric tunnel junction synapses for hardware neural network applications

Nanoscale, 2018, Vol. 10 (33), pp. 15826-15833
Author(s): Lin Chen, Tian-Yu Wang, Ya-Wei Dai, Ming-Yang Cha, Hao Zhu, ...

Brain-inspired neuromorphic computing has shown great promise beyond conventional Boolean logic.

2021, Vol. 17 (2), pp. 1-27
Author(s): Adi Eliahu, Ronny Ronen, Pierre-Emmanuel Gaillardon, Shahar Kvatinsky

Computationally intensive neural network applications often need to run on resource-limited low-power devices. Numerous hardware accelerators have been developed to speed up neural network workloads and reduce power consumption; however, most target data centers and full-fledged systems, and acceleration in ultra-low-power systems has been only partially addressed. In this article, we present multiPULPly, an accelerator that integrates memristive technologies within standard low-power CMOS technology to accelerate multiplication in neural network inference on ultra-low-power systems. The accelerator was designed for PULP, an open-source microcontroller system that uses low-power RISC-V processors. Memristors were integrated into the accelerator to consume power only when the memory is active, to resume tasks with no context-restoring overhead, and to enable highly parallel analog multiplication. To reduce energy consumption, we propose novel dataflows that handle common multiplication scenarios and are tailored to our architecture. The accelerator was tested on FPGA and achieved a peak energy efficiency of 19.5 TOPS/W, outperforming state-of-the-art accelerators by 1.5× to 4.5×.
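The "highly parallel analog multiplication" mentioned above is the usual memristive-crossbar idea: weights are stored as conductances, inputs are applied as read voltages, and each column's summed current realizes one dot product via Ohm's and Kirchhoff's laws. A minimal numerical sketch of that model (all names and values here are illustrative assumptions, not taken from the multiPULPly design):

```python
# Idealized model of analog matrix-vector multiplication in a memristive
# crossbar: output current of column j is I_j = sum_i V_i * G[i][j].
# All parameter names and values are hypothetical illustrations.

def crossbar_mvm(voltages, conductances):
    """Return per-column output currents (amperes) for a crossbar whose
    cells hold the given conductances (siemens) and whose rows are
    driven by the given read voltages (volts)."""
    rows = len(conductances)
    cols = len(conductances[0])
    assert len(voltages) == rows, "one voltage per crossbar row"
    return [
        sum(voltages[i] * conductances[i][j] for i in range(rows))
        for j in range(cols)
    ]

# Example: a 2x2 weight matrix mapped to microsiemens-scale conductances,
# with inputs encoded as small read voltages.
G = [[1e-6, 2e-6],
     [3e-6, 4e-6]]
V = [0.1, 0.2]
I = crossbar_mvm(V, G)  # [7e-07, 1e-06] amperes
```

In a real device the column currents would then be digitized (e.g. by ADCs or sense amplifiers) and scaled back to the numeric domain; this sketch only captures the analog dot-product step that makes the multiplication energy-efficient and parallel.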


2016, Vol. 27 (36), pp. 365204
Author(s): I-Ting Wang, Chih-Cheng Chang, Li-Wen Chiu, Teyuh Chou, Tuo-Hung Hou
