WISER: Deep Neural Network Weight-bit Inversion for State Error Reduction in MLC NAND Flash

Author(s): Jaehun Jang, Jong Hwan Ko
2021, pp. 1-1

Author(s): Myeonggu Kang, Hyeonuk Kim, Hyein Shin, Jaehyeong Sim, Kyeonghan Kim, ...

Author(s): Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Séverine Dubuisson, Isabelle Bloch

2017, Vol 40 (6), pp. 1741-1745
Author(s): Chris JB Macnab

This paper points out problems in a paper that appears in the Transactions of the Institute of Measurement and Control, entitled "An intelligent CMAC-PD torque controller with anti-over-learning scheme for electric load simulator" by Bo Yang, Huatao Han and Ran Bao (Vol. 39, No. 2, pp. 192–200, 2016). Their proposed neural-network weight update makes no intuitive sense: it introduces a term that keeps the output of the neural network close to its input. Here, a standard linear analysis shows that applying their proposed update to adaptive parameters will, in general, produce a large steady-state error; for their particular machine, a low steady-state error results only because the ideal numerical value of the control signal in Volts happens to be close to the numerical value of the desired input signal in Newton-meters. Furthermore, although the authors claim that their weight update prevents overlearning, they neither conduct a Lyapunov analysis nor graph any measure of their weights in the results section. This paper shows that a standard Lyapunov analysis, which establishes uniformly ultimately bounded signals for traditional robust update modifications such as leakage, fails to reveal any bound on the signals for the proposed method. Moreover, simulations demonstrate that with the proposed method the weights continue to grow at a linear rate over a long simulation.
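The contrast between an unmodified integral weight update and a leakage-modified one can be illustrated with a minimal sketch. This is not the paper's CMAC controller: it is a toy one-weight adaptive law driven by a constant error residual, with illustrative values for the adaptation gain, leakage coefficient, and residual. It shows the qualitative behavior described above: a pure integral update accumulates the residual and grows linearly without bound, while adding leakage (sigma-modification) keeps the weight bounded.

```python
# Toy sketch (assumed values, not the cited paper's model): one adaptive
# weight driven by a persistent error residual e(t)*phi(t), held constant.
gamma = 0.5      # adaptation gain (illustrative)
sigma = 0.1      # leakage coefficient (illustrative)
residual = 0.2   # persistent error * basis term, held constant
dt = 0.01        # Euler integration step
steps = 100_000  # 1000 s of simulated time

w_plain = 0.0    # pure integral update: w' = gamma * residual
w_leak = 0.0     # leakage update:       w' = gamma * residual - sigma * w
for _ in range(steps):
    w_plain += dt * gamma * residual
    w_leak += dt * (gamma * residual - sigma * w_leak)

# w_plain grows linearly (~ gamma * residual * t), with no bound as t grows;
# w_leak settles near the fixed point gamma * residual / sigma.
print(w_plain, w_leak)
```

The fixed point of the leakage update, where `gamma * residual == sigma * w`, is what a Lyapunov argument turns into a uniform ultimate bound; the plain update has no such fixed point under a persistent residual, mirroring the linear weight growth reported in the simulations.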

