Overview of Echo State Networks using Different Reservoirs and Activation Functions

Author(s):  
Dan-Andrei Margin ◽  
Virgil Dobrota
2018 ◽  
Vol 6 (3) ◽  
pp. 122-126
Author(s):  
Mohammed Ibrahim Khan ◽  
Akansha Singh ◽  
Anand Handa ◽  
...  

2012 ◽  
Vol 10 (3) ◽  
pp. 181-191 ◽  
Author(s):  
Hugo Valadares Siqueira ◽  
Levy Boccato ◽  
Romis Attux ◽  
Christiano Lyra Filho

2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and their choice affects both optimization and the quality of the results. Several activation functions have been introduced in machine learning for many practical applications, but it has not been established which activation function should be used in the hidden layers of deep neural networks. Objective: The primary objective of this analysis was to determine which activation function should be used in the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured on a two-class dataset (Cat/Dog). The network used three convolutional layers, each followed by a pooling layer. The dataset was split into two parts: the first 8000 images were used for training the network and the remaining 2000 images for testing it. Results: The experimental comparison was performed by evaluating the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) in the hidden layers and analyzing the validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU in its hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of both accuracy and speed. These advantages of ReLU across the hidden layers of a CNN support effective and fast retrieval of images from databases.
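
As a rough illustration of the setup described in this abstract, here is a minimal sketch of a three-convolutional-layer binary classifier with a swappable hidden-layer activation, assuming a Keras-style API; the filter counts, image size, and optimizer are illustrative assumptions, not values reported by the authors.

```python
# Minimal sketch (assumed Keras API) of a 3-conv-layer Cat/Dog classifier
# for comparing hidden-layer activation functions. Filter counts and the
# input size are illustrative assumptions, not values from the paper.
from tensorflow.keras import layers, models

def build_cnn(activation="relu", input_shape=(64, 64, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Three convolutional layers, each followed by a pooling layer
    for filters in (32, 64, 128):
        model.add(layers.Conv2D(filters, (3, 3), activation=activation))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation=activation))
    model.add(layers.Dense(1, activation="sigmoid"))  # binary Cat/Dog output
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Swapping the hidden-layer activation reproduces the kind of comparison made
# in the study (PReLU would need a separate layer rather than a string name):
# for act in ("relu", "tanh", "selu", "elu"):
#     model = build_cnn(activation=act)
#     model.fit(train_images, train_labels, epochs=25,
#               validation_data=(val_images, val_labels))
```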


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

Abstract: This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). The results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The implemented hardware component is an RBF neural network with four neurons in the hidden layer and one output neuron with a sigmoid activation function, realized on an FPGA with 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is designed on the FPGA as a separate computing unit. The speed, measured as the total delay of the combinational circuit of the RBF network block, was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. These results were obtained with the Spartan-3 family of chips; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
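
For reference, the topology described above (four Gaussian hidden neurons feeding one sigmoid output neuron) can be sketched as a floating-point software model; the centres, widths, and weights below are illustrative placeholders, and the 16-bit fixed-point arithmetic of the FPGA design is not modelled here.

```python
# Floating-point reference sketch of the RBF topology described above:
# 4 Gaussian hidden neurons and one sigmoid output neuron. All parameter
# values are illustrative, not taken from the paper's hardware design.
import numpy as np

def gaussian(x, centre, width):
    # Radial-basis activation: exp(-||x - c||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - centre) ** 2) / (2.0 * width ** 2))

def rbf_forward(x, centres, widths, weights, bias):
    hidden = np.array([gaussian(x, c, s) for c, s in zip(centres, widths)])
    z = hidden @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid output neuron

# Example with illustrative parameters
centres = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 0.0], [0.0, 1.0]])
widths  = np.array([0.3, 0.3, 0.3, 0.3])
weights = np.array([0.8, -0.4, 0.6, 0.2])
print(rbf_forward(np.array([0.4, 0.6]), centres, widths, weights, bias=0.1))
```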


Author(s):  
Patrick Knöbelreiter ◽  
Thomas Pock

Abstract: In this work, we propose a learning-based method to denoise and refine disparity maps. The proposed variational network arises naturally from unrolling the iterates of a proximal gradient method applied to a variational energy defined in a joint disparity, color, and confidence image space. Our method learns a robust collaborative regularizer that leverages the joint statistics of the color image, the confidence map, and the disparity map. Due to the variational structure of our method, the individual steps can be easily visualized, making the method interpretable. We can therefore provide interesting insights into how our method refines and denoises disparity maps. To this end, we visualize and interpret the learned filters and activation functions and demonstrate the increased reliability of the predicted pixel-wise confidence maps. Furthermore, the optimization-based structure of our refinement module allows us to compute eigen disparity maps, which reveal structural properties of the module. The efficiency of our method is demonstrated on the publicly available stereo benchmarks Middlebury 2014 and KITTI 2015.
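
To illustrate the unrolling idea behind such a variational network, the following sketch unrolls a few proximal gradient iterates on a toy energy with a quadratic data term and an l1 proximal step; the paper's actual method uses a learned collaborative regularizer over disparity, color, and confidence, which this stand-in does not reproduce.

```python
# Structural sketch of unrolling T proximal gradient iterates. The quadratic
# data term and soft-thresholding prox are illustrative stand-ins for the
# learned collaborative regularizer described in the abstract.
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_prox_grad(d_noisy, steps=5, tau=0.5, lam=0.1):
    d = d_noisy.copy()
    for _ in range(steps):
        grad = d - d_noisy              # gradient of 0.5 * ||d - d_noisy||^2
        d = soft_threshold(d - tau * grad, tau * lam)
        # In a variational network, the step size and the prox itself would be
        # learned, conditioned additionally on color and confidence inputs.
    return d

noisy_disparity = np.random.rand(32, 32)   # toy noisy disparity map
refined = unrolled_prox_grad(noisy_disparity)
print(refined.shape)
```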


2021 ◽  
Vol 11 (15) ◽  
pp. 6704
Author(s):  
Jingyong Cai ◽  
Masashi Takemoto ◽  
Yuming Qiu ◽  
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and in short supply in many scenarios. Previous work has shown the advantages of computing activation functions, such as the sigmoid, with shift-and-add operations, although such approaches fail to remove multiplications from training altogether. In this paper, we propose an innovative approach that converts all multiplications in the forward and backward inferences of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between the weights and error signals are thus transferred to multiplications of their sine values, which can be replaced with simpler operations via the product-to-sum formula. In addition, a rectified sine activation function is used to convert layer inputs into sine values as well. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices without sufficient hardware multipliers. Experimental results demonstrate that the method achieves performance close to that of classical training algorithms. The proposed approach sheds new light on future hardware customization research for machine learning.
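
The product-to-sum substitution at the core of this approach can be checked numerically: when two small factors are represented by their sine values, their product becomes a difference of two cosines, which maps to table lookups and shift-and-add operations in hardware. The values below are illustrative.

```python
# Numerical sketch of the product-to-sum idea described above. Input values
# are illustrative; the identity itself is exact.
import numpy as np

def sine_product(a, b):
    # sin(a) * sin(b) = 0.5 * (cos(a - b) - cos(a + b))
    return 0.5 * (np.cos(a - b) - np.cos(a + b))

w, e = 0.03, -0.07            # small weight and backpropagated error
direct = w * e                # conventional multiplication
approx = sine_product(w, e)   # product of sin(w) and sin(e); since w and e
                              # are close to zero, sin(w) ~ w and sin(e) ~ e
print(direct, approx)         # the two values agree closely for small inputs
```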


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Dan Hou ◽  
Ling Li ◽  
Tengfei Ma ◽  
Jialong Pei ◽  
Zhongyu Zhao ◽  
...  

Abstract: Bamboo is known for its edible shoots and beautiful texture and has considerable economic and ornamental value. Unusually among traditional flowering plants, many bamboo species undergo extensive synchronized flowering followed by large-scale death, seriously affecting the productivity and application of bamboo forests. To date, the molecular mechanism underlying bamboo flowering characteristics has remained unknown. In this study, a SUPPRESSOR OF OVEREXPRESSION OF CONSTANS1 (SOC1)-like gene, BoMADS50, was identified from Bambusa oldhamii. BoMADS50 was highly expressed in mature leaves and during the floral primordium formation period of B. oldhamii flowering, and overexpression of BoMADS50 caused early flowering in transgenic rice. Moreover, BoMADS50 could interact with APETALA1/FRUITFULL (AP1/FUL)-like proteins (BoMADS14-1/2, BoMADS15-1/2) in vivo, and the expression of BoMADS50 was significantly promoted by BoMADS14-1, further indicating a synergistic effect between BoMADS50 and BoAP1/FUL-like proteins in regulating B. oldhamii flowering. We also identified four additional transcripts of BoMADS50 (BoMADS50-1/2/3/4) with different nucleotide variations. Although their protein coding sequences were polymorphic, they had flowering-activation functions similar to that of BoMADS50. Yeast one-hybrid and transient expression assays subsequently showed that both BoMADS50 and BoMADS50-1 bind to their own promoter fragment and to that of the SHORT VEGETATIVE PHASE (SVP)-like gene BoSVP, but only BoMADS50-1 can positively induce their transcription. Therefore, nucleotide variations likely endow BoMADS50-1 with strong regulatory activity. Thus, BoMADS50 and BoMADS50-1/2/3/4 are probably important positive flowering regulators in B. oldhamii. Moreover, the functional conservatism and specificity of BoMADS50 and BoMADS50-1 might be related to the synchronized and sporadic flowering characteristics of B. oldhamii.

