Compact yet efficient hardware implementation of artificial neural networks with customized topology

2012 ◽  
Vol 39 (10) ◽  
pp. 9191-9206 ◽  
Author(s):  
Nadia Nedjah ◽  
Rodrigo Martins da Silva ◽  
Luiza de Macedo Mourelle

1997 ◽  
Vol 9 (5) ◽  
pp. 1109-1126
Author(s):  
Zhiyu Tian ◽  
Ting-Ting Y. Lin ◽  
Shiyuan Yang ◽  
Shibai Tong

With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important to their diagnosis, repair, reconfiguration, and reliable application. The behavior of feedforward neural networks with a hard-limiting activation function under stuck-at faults is studied in this article. It is shown that stuck-at-M faults have a larger effect on the network's performance than mixed stuck-at faults, which in turn have a larger effect than stuck-at-0 faults. Furthermore, for the same percentage of faulty interconnections, the fault-tolerant ability of the network decreases as its size increases. The results of our analysis are validated by Monte Carlo simulations.
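The fault model described above can be illustrated with a minimal Monte Carlo sketch. This is not the authors' implementation: the network size, weights, the fault-injection helper `inject_faults`, and the error metric are all illustrative assumptions. It simulates a single-layer feedforward network with hard-limiting (step) neurons, forces a fraction of interconnection weights to a stuck value (0, ±M, or a random mix), and estimates how often the faulty network's output differs from the fault-free one.

```python
import random

def hard_limit(x):
    # Hard-limiting (step) activation: 1 if the weighted sum is
    # non-negative, 0 otherwise.
    return 1 if x >= 0 else 0

def forward(weights, inputs):
    # Single-layer feedforward pass with hard-limiting neurons.
    return [hard_limit(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def inject_faults(weights, fraction, mode, M=1.0, rng=random):
    # Return a copy of `weights` with `fraction` of the interconnections
    # stuck at a fixed value. "stuck-at-M" is interpreted here as the
    # weight saturating to +/-M (sign-preserving) -- an assumption,
    # not necessarily the article's exact definition.
    faulty = [row[:] for row in weights]
    cells = [(i, j) for i in range(len(faulty))
             for j in range(len(faulty[i]))]
    k = int(round(fraction * len(cells)))
    for i, j in rng.sample(cells, k):
        if mode == "stuck-at-0":
            faulty[i][j] = 0.0
        elif mode == "stuck-at-M":
            faulty[i][j] = M if faulty[i][j] >= 0 else -M
        else:  # mixed stuck-at faults
            faulty[i][j] = rng.choice([0.0, M, -M])
    return faulty

def monte_carlo_error(weights, mode, fraction=0.1, trials=2000, seed=0):
    # Fraction of random binary inputs on which the faulty network's
    # output vector differs from the fault-free one.
    rng = random.Random(seed)
    n_in = len(weights[0])
    errors = 0
    for _ in range(trials):
        x = [rng.choice([0, 1]) for _ in range(n_in)]
        faulty = inject_faults(weights, fraction, mode, rng=rng)
        if forward(faulty, x) != forward(weights, x):
            errors += 1
    return errors / trials
```

Comparing `monte_carlo_error(w, "stuck-at-0", …)`, `monte_carlo_error(w, "mixed", …)`, and `monte_carlo_error(w, "stuck-at-M", …)` on the same weight matrix gives a rough empirical picture of the ordering the article reports, with stuck-at-M faults typically the most damaging.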


2020 ◽  
Vol 53 (2) ◽  
pp. 7813-7818
Author(s):  
Rafael Koji Vatanabe Brunello ◽  
Renato Coral Sampaio ◽  
Carlos H Llanos ◽  
Leandro dos Santos Coelho ◽  
Helon Vicente Hultmann Ayala
