Optimal Design Methods to Transform 3D NAND Flash into a High-Density, High-Bandwidth and Low-Power Nonvolatile Computing in Memory (nvCIM) Accelerator for Deep-Learning Neural Networks (DNN)

Author(s): Hang-Ting Lue, Po-Kai Hsu, Ming-Liang Wei, Teng-Hao Yeh, Pei-Ying Du, ...
2019, Vol 7, pp. 1085-1093
Author(s): Sung-Tae Lee, Suhwan Lim, Nag Yong Choi, Jong-Ho Bae, Dongseok Kwon, ...
2020, Vol 20 (7), pp. 4138-4142
Author(s): Sung-Tae Lee, Suhwan Lim, Nagyong Choi, Jong-Ho Bae, Dongseok Kwon, ...

NAND flash memory, a mature technology, offers high density and large storage capacity per chip because cells are connected in series between a bit-line and a source-line. A NAND flash cell can therefore serve as a synaptic device, which is very useful for building a high-density synaptic array. In this paper, the effect of the word-line bias on the linearity of the multi-level conductance steps of the NAND flash cell is investigated. A 3-layer perceptron network (784×200×10) is trained on the MNIST data set using a weight-update method suited to NAND flash memory. The linearity of the multi-level conductance steps improves as the word-line bias increases from Vth − 0.5 V to Vth + 1 V at a fixed bit-line bias of 0.2 V. As a result, the learning accuracy also improves as the word-line bias increases from Vth − 0.5 V to Vth + 1 V.
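The abstract describes training a 784×200×10 perceptron on MNIST while the weight updates are constrained by the cell's multi-level conductance steps. The NumPy sketch below is not the paper's weight-update method; it is a minimal, assumed illustration of how discrete, nonlinearly saturating conductance steps can be folded into a sign-based (Manhattan-rule) update. The network size matches the abstract, but the level count, nonlinearity factor, and update rule are placeholders.

```python
# Minimal sketch (assumed, not the paper's method): a 784x200x10 perceptron whose
# weights are updated in discrete steps that shrink near saturation, mimicking
# the nonlinear multi-level conductance increments of a NAND flash cell.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 784, 200, 10
N_LEVELS = 32          # assumed number of programmable conductance levels
NONLINEARITY = 0.5     # 0 = perfectly linear steps; larger = more nonlinear
STEP = 2.0 / N_LEVELS  # weights are clipped to [-1, 1] and move in discrete steps

W1 = rng.normal(0, 0.1, (N_IN, N_HID))
W2 = rng.normal(0, 0.1, (N_HID, N_OUT))

def nonlinear_step(w, grad):
    """Sign-based update whose effective step shrinks as the weight approaches
    the bound it is moving toward (a simple stand-in for nonlinear conductance)."""
    direction = -np.sign(grad)
    headroom = (1.0 - direction * w) / 2.0            # 1 far from bound, 0 at bound
    eff = STEP * np.exp(-NONLINEARITY * (1.0 - headroom))
    return np.clip(w + direction * eff, -1.0, 1.0)

def forward(x):
    h = np.tanh(x @ W1)                               # hidden layer (200 units)
    logits = h @ W2                                   # output layer (10 classes)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

def train_step(x, y_onehot):
    """One softmax-cross-entropy backprop pass with device-style weight updates."""
    global W1, W2
    h, p = forward(x)
    d_logits = (p - y_onehot) / len(x)
    gW2 = h.T @ d_logits
    gW1 = x.T @ ((d_logits @ W2.T) * (1 - h**2))
    W1 = nonlinear_step(W1, gW1)
    W2 = nonlinear_step(W2, gW2)

# Usage with random stand-in data (replace with MNIST images and labels):
x = rng.random((64, N_IN))
y = np.eye(N_OUT)[rng.integers(0, N_OUT, 64)]
train_step(x, y)
```

With a larger NONLINEARITY value the effective steps become strongly asymmetric near saturation, which is the kind of degradation the word-line bias sweep in the abstract is meant to mitigate.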


IEEE Access, 2020, Vol 8, pp. 114330-114339
Author(s): Sung-Tae Lee, Dongseok Kwon, Hyeongsu Kim, Honam Yoo, Jong-Ho Lee

2020
Author(s): Dean Sumner, Jiazhen He, Amol Thakkar, Ola Engkvist, Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method, "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested with two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation delivered increased performance over both non-augmented data and conventional SMILES-randomization augmentation when used to train the baseline models. Furthermore, Levenshtein augmentation appears to produce what we define as attentional gain: an enhancement in the underlying network's ability to recognize molecular motifs.
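The abstract does not spell out how the Levenshtein criterion is applied when building training pairs, so the sketch below is one assumed interpretation: for each reaction, several randomized reactant SMILES are generated with RDKit, and only those closest in edit distance to the product SMILES are kept, so that reactant and product strings share local sub-sequences. The helper names (levenshtein, levenshtein_augment), the candidate count, and the selection rule are illustrative, not taken from the paper; RDKit's MolToSmiles(..., doRandom=True) is used for SMILES randomization.

```python
# Assumed sketch of Levenshtein-guided pairing of randomized reactant SMILES
# with their product SMILES; not the authors' exact procedure.
from rdkit import Chem

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_augment(reactant_smiles: str, product_smiles: str,
                        n_random: int = 20, keep: int = 3):
    """Generate randomized reactant SMILES and keep the ones with the smallest
    edit distance to the product SMILES, returned as (reactant, product) pairs."""
    mol = Chem.MolFromSmiles(reactant_smiles)
    candidates = {Chem.MolToSmiles(mol, doRandom=True) for _ in range(n_random)}
    ranked = sorted(candidates, key=lambda s: levenshtein(s, product_smiles))
    return [(s, product_smiles) for s in ranked[:keep]]

# Usage: a toy esterification-style reaction (illustrative only).
for reactant, product in levenshtein_augment("CC(=O)O.OCC", "CC(=O)OCC"):
    print(reactant, ">>", product)
```

Choosing randomized reactant strings that are close in edit distance to the product keeps atom orderings roughly aligned across the pair, which is the intuition behind the "local sub-sequence similarity" the abstract describes.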

