Keys to hardware implementation

Shigeyuki Takano (2021), pp. 205-217.

Genki Moriguchi, Takashi Kambe, Gen Fujita, Hajime Sawano (2015), Vol. 135 (11), pp. 1299-1306.

Prof. Vipul Patel, Prof. Sanjay Patel, Nikunj Patel, Prof. Sanjay Prajapati (2015), Vol. 1 (3), p. 4.

Volodymyr Shymkovych, Sergii Telenyk, Petro Kravets

Abstract: This article introduces a method for realizing the Gaussian activation function of radial-basis function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this method is an RBF neural network with four hidden-layer neurons and one output neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 look-up tables (LUTs). Each hidden-layer neuron of the RBF network is designed on the FPGA as a separate computing unit. The total delay of the combinational circuit of the RBF network block is 101.579 ns. The implementation of the Gaussian activation function of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. These results were obtained on the Spartan-3 chip family; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
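To make the described topology concrete, the following Python sketch simulates the arithmetic of such a network in software: four Gaussian hidden neurons feeding one sigmoid output neuron, with 16-bit fixed-point quantization. This is not the authors' FPGA implementation; the Q8.8 number format, centers, widths, and weights are illustrative assumptions, and the Gaussian, which on an FPGA would be a LUT-based or piecewise approximation, is computed in floating point and then quantized.

```python
# Minimal sketch (not the paper's implementation) of an RBF network with
# 4 Gaussian hidden neurons and 1 sigmoid output in 16-bit fixed point.
import math

FRAC_BITS = 8            # assumed Q8.8 fixed-point format
SCALE = 1 << FRAC_BITS

def to_fix(x: float) -> int:
    """Quantize a real value to a signed 16-bit fixed-point integer."""
    v = int(round(x * SCALE))
    return max(-(1 << 15), min((1 << 15) - 1, v))  # saturate to 16 bits

def fix_mul(a: int, b: int) -> int:
    """Fixed-point multiply with rescaling back to Q8.8."""
    return (a * b) >> FRAC_BITS

def gaussian_fix(x: int, c: int, inv_2s2: int) -> int:
    """Gaussian activation exp(-(x - c)^2 / (2*sigma^2)) in fixed point.
    The float exp() result is quantized, mimicking a small absolute error
    like the ~0.005 reported in the abstract."""
    d = x - c
    e = fix_mul(fix_mul(d, d), inv_2s2)
    return to_fix(math.exp(-e / SCALE))

def sigmoid_fix(x: int) -> int:
    """Sigmoid activation of the output neuron, quantized to fixed point."""
    return to_fix(1.0 / (1.0 + math.exp(-x / SCALE)))

def rbf_forward(x: float) -> float:
    """Forward pass: 1 input -> 4 Gaussian hidden neurons -> 1 sigmoid output."""
    centers = [to_fix(c) for c in (-1.0, -0.25, 0.25, 1.0)]  # assumed centers
    inv_2s2 = to_fix(1.0 / (2 * 0.5 ** 2))                   # assumed sigma = 0.5
    weights = [to_fix(w) for w in (0.6, -0.4, 0.8, 0.3)]     # assumed weights
    xf = to_fix(x)
    acc = 0
    for c, w in zip(centers, weights):
        acc += fix_mul(gaussian_fix(xf, c, inv_2s2), w)
    return sigmoid_fix(acc) / SCALE

if __name__ == "__main__":
    for x in (-1.0, 0.0, 0.5):
        print(f"f({x:+.2f}) = {rbf_forward(x):.4f}")
```

In hardware, each call to `gaussian_fix` would correspond to one of the independent hidden-layer computing units described in the abstract, so the four units evaluate in parallel rather than in a loop.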

