Erratum: Simple sigmoid-like activation function suitable for digital hardware implementation

1992 ◽  
Vol 28 (19) ◽  
pp. 1852
Author(s):  
H.K. Kwan
Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

Abstract: This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Modeling results for the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this algorithm is an RBF neural network with four hidden-layer neurons and one output neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is designed on the FPGA as a separate computing unit. The speed, measured as the total delay of the combinational circuit of the RBF network block, was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. These results were obtained on the Spartan 3 chip family; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
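The fixed-point Gaussian activation described in the abstract can be sketched in software before committing it to a LUT-based FPGA design. The 16-bit word length and the ±0.005 error bound come from the abstract; the Q4.12 format and the helper names below are illustrative assumptions, not the authors' design.

```python
# Illustrative software model of a 16-bit fixed-point Gaussian activation,
# exp(-x^2), as might precede an FPGA lookup-table implementation.
# The Q4.12 format (4 integer bits, 12 fractional bits) is an assumption.

import math

FRAC_BITS = 12              # assumed Q4.12 fixed-point format
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real number to 16-bit signed fixed point, with saturation."""
    v = int(round(x * SCALE))
    return max(-(1 << 15), min((1 << 15) - 1, v))

def gaussian_fixed(x_fx: int) -> int:
    """Gaussian exp(-x^2) evaluated on a fixed-point input."""
    x = x_fx / SCALE
    return to_fixed(math.exp(-x * x))

# Check that quantization alone stays well inside the paper's ±0.005 bound
worst = max(abs(gaussian_fixed(to_fixed(x / 100)) / SCALE - math.exp(-(x / 100) ** 2))
            for x in range(-400, 401))
print(worst)  # quantization-only error; a real LUT adds table-interpolation error
```

A hardware version would replace the `math.exp` call with a precomputed table addressed by the quantized input, which is where the 106-LUT figure in the abstract comes from.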


Author(s):  
Ildar Batyrshin ◽  
Antonio Hernández Zavala ◽  
Oscar Camacho Nieto ◽  
Luis Villa Vargas

Author(s):  
R. Caponetto ◽  
G. Dongola ◽  
A. Gallo

In this paper the fractional-order integrative operator s^(−m), where m is a real positive number, is approximated via a mathematical formula, and a hardware implementation of the fractional integral operator is then proposed using a Field Programmable Gate Array (FPGA). Digital hardware implementation of a fractional-order integral operator requires careful consideration of system performance, hardware cost, and hardware speed. FPGA-based implementations are up to one hundred times faster than implementations based on microprocessors; this extra speed can be exploited to achieve higher performance in the digital approximation of fractional-order systems.
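One standard digital approximation of the fractional integrator s^(−m) is the truncated Grünwald–Letnikov sum; the abstract does not specify which formula the authors use, so the sketch below is a generic illustration, not their method.

```python
# Grünwald-Letnikov (GL) approximation of the fractional integrator s^(-m).
# A standard discretization shown for illustration; the paper's own
# approximation formula may differ.

def gl_fractional_integral(u, m, h):
    """Fractional integral of order m > 0 of samples u taken at step h."""
    n = len(u)
    # GL binomial weights for order -m, via the usual recurrence
    # w[0] = 1, w[j] = w[j-1] * (1 - (1 - m) / j)
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (1.0 - m) / j))
    hm = h ** m
    return [hm * sum(w[j] * u[k - j] for j in range(k + 1)) for k in range(n)]

# Sanity check: with m = 1 this reduces to a running rectangle-rule sum.
h = 0.01
y = gl_fractional_integral([1.0] * 100, 1.0, h)
print(y[-1])  # integral of the constant 1 over [0, 1]
```

An FPGA realization would pipeline the weighted sum with fixed-point multipliers, which is where the hundredfold speedup over microprocessor implementations cited above becomes attainable.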


1997 ◽  
Vol 9 (5) ◽  
pp. 1109-1126
Author(s):  
Zhiyu Tian ◽  
Ting-Ting Y. Lin ◽  
Shiyuan Yang ◽  
Shibai Tong

With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important to their diagnosis, repair, reconfiguration, and reliable application. The behavior of feedforward neural networks with hard-limiting activation functions under stuck-at faults is studied in this article. It is shown that stuck-at-M faults have a larger effect on the network's performance than mixed stuck-at faults, which in turn have a larger effect than stuck-at-0 faults. Furthermore, the fault-tolerant ability of the network decreases as its size increases for the same percentage of faulty interconnections. The results of our analysis are validated by Monte-Carlo simulations.
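A Monte-Carlo experiment of the kind the abstract validates its analysis with can be sketched as follows. The network size, fault fraction, stuck value M, and helper names are illustrative assumptions, not the authors' setup.

```python
# Monte-Carlo sketch of stuck-at faults in a feedforward network with a
# hard-limiting (sign) activation. Network size, fault fraction, and the
# stuck magnitude M are illustrative assumptions.

import random

random.seed(0)
N_IN, N_HID = 16, 8
M = 1.0                      # magnitude for stuck-at-M faults
FAULT_FRACTION = 0.1         # 10% of interconnections faulty

def hardlim(x):
    """Hard-limiting (sign) activation."""
    return 1.0 if x >= 0 else -1.0

def forward(w_hid, w_out, x):
    h = [hardlim(sum(w * xi for w, xi in zip(row, x))) for row in w_hid]
    return hardlim(sum(w * hi for w, hi in zip(w_out, h)))

def inject(weights, stuck_value):
    """Return a copy of the weight matrix with a fraction stuck at stuck_value."""
    flat = [(i, j) for i, row in enumerate(weights) for j in range(len(row))]
    faulty = random.sample(flat, int(FAULT_FRACTION * len(flat)))
    out = [row[:] for row in weights]
    for i, j in faulty:
        out[i][j] = stuck_value
    return out

def error_rate(stuck_value, trials=2000):
    """Fraction of random inputs on which the faulty net disagrees."""
    w_hid = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
    w_out = [random.uniform(-1, 1) for _ in range(N_HID)]
    w_faulty = inject(w_hid, stuck_value)
    errs = 0
    for _ in range(trials):
        x = [random.choice((-1.0, 1.0)) for _ in range(N_IN)]
        errs += forward(w_hid, w_out, x) != forward(w_faulty, w_out, x)
    return errs / trials

r_m, r_0 = error_rate(M), error_rate(0.0)
print(r_m, r_0)  # disagreement rates for stuck-at-M vs stuck-at-0 faults
```

Averaging such disagreement rates over many random networks and fault placements is how the ordering reported above (stuck-at-M worst, stuck-at-0 mildest) would be observed empirically.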

