Cascading Neural Networks Using Adaptive Sigmoidal Function

2011 ◽  
Vol 403-408 ◽  
pp. 858-865
Author(s):  
Sudhir Kumar Sharma ◽  
Pravin Chandra

This paper presents cascading neural networks using an adaptive sigmoidal function (CNNASF). The proposed algorithm emphasizes architectural adaptation and functional adaptation during training. It is a constructive approach that builds the cascading architecture dynamically. The activation functions used at the hidden-layer nodes belong to a well-defined sigmoidal class and are adapted during training. The algorithm determines not only the optimum number of hidden-layer nodes but also the optimum sigmoidal function for them. A simple variant derived from CNNASF fixes the sigmoid function used at the hidden-layer nodes. The two variants are compared on five regression functions. Simulation results reveal that the adaptive sigmoidal function offers several advantages over the traditional fixed sigmoid function, resulting in increased flexibility, smoother learning, better convergence, and better generalization performance.
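
As a rough illustration of the functional-adaptation idea, the sketch below (not the authors' algorithm) shows a sigmoidal unit with a trainable slope parameter updated by gradient descent; the slope parameter, learning rate, and toy target are assumptions made for this example.

```python
import numpy as np

def adaptive_sigmoid(x, a):
    """Sigmoidal unit with a trainable slope parameter a (a > 0)."""
    return 1.0 / (1.0 + np.exp(-a * x))

# Toy functional adaptation: adjust the slope a of one node so its output on a
# fixed net input matches a target value, using the gradient of a squared error.
x, target = 0.8, 0.9          # assumed net input and desired output
a, lr = 1.0, 2.0              # initial slope and learning rate (assumptions)
for _ in range(500):
    s = adaptive_sigmoid(x, a)
    ds_da = x * s * (1.0 - s)             # derivative of the sigmoid w.r.t. its slope
    a -= lr * 2.0 * (s - target) * ds_da  # gradient step on (s - target)^2
print(round(a, 3), round(adaptive_sigmoid(x, a), 3))
```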

2011 ◽  
Vol 403-408 ◽  
pp. 3867-3874 ◽  
Author(s):  
Sudhir Kumar Sharma ◽  
Pravin Chandra

In this paper we propose a constructive algorithm with an adaptive sigmoidal function for designing a single-hidden-layer feedforward neural network (CAASF). The proposed algorithm emphasizes architectural adaptation and functional adaptation during training. It is a constructive approach that builds single-hidden-layer neural networks dynamically. The activation functions used at the non-linear hidden nodes belong to a well-defined sigmoidal class and are adapted during training. The algorithm determines not only the optimum number of hidden nodes but also the optimum sigmoidal function for the non-linear nodes. A simple variant derived from CAASF fixes the sigmoidal function used at the hidden nodes. The two variants are compared on five regression functions. Simulation results reveal that the adaptive sigmoidal function offers several advantages over the traditional fixed sigmoid function, resulting in increased flexibility, smoother learning, better convergence, and better generalization performance.
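
The constructive aspect, adding hidden nodes one at a time until the training error stops improving, can be sketched roughly as below. This is an illustrative outline only: output weights are refit by least squares as a simplification, and the stopping criterion is assumed; it is not the CAASF training procedure.

```python
import numpy as np

def sigmoid(z, a=1.0):
    return 1.0 / (1.0 + np.exp(-a * z))

def grow_hidden_layer(X, y, max_nodes=20, tol=1e-4, seed=0):
    """Illustrative growth loop (not CAASF): add hidden nodes one at a time,
    refit output weights by least squares, and stop when the training MSE
    no longer improves by more than tol."""
    rng = np.random.default_rng(seed)
    weights, slopes, best_mse = [], [], np.inf
    for _ in range(max_nodes):
        weights.append(rng.normal(size=X.shape[1]))   # new node's input weights
        slopes.append(rng.uniform(0.5, 2.0))          # new node's sigmoid slope
        H = np.column_stack([sigmoid(X @ w, a) for w, a in zip(weights, slopes)])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # refit output weights
        mse = float(np.mean((H @ beta - y) ** 2))
        if best_mse - mse < tol:
            break
        best_mse = mse
    return weights, slopes, beta, mse

# Toy regression problem
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
weights, slopes, beta, mse = grow_hidden_layer(X, y)
print(len(weights), "hidden nodes, training MSE:", round(mse, 4))
```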


Author(s):  
Xiaoyang Liu ◽  
Zhigang Zeng

The paper presents memristor crossbar architectures for implementing layers in deep neural networks, including the fully connected layer, the convolutional layer, and the pooling layer. The crossbars realize positive and negative weight values and approximately implement various nonlinear activation functions. The layers constructed from the crossbars are then used to build a memristor-based multi-layer neural network (MMNN) and a memristor-based convolutional neural network (MCNN). Two kinds of in-situ weight update schemes, fixed-voltage update and approximately linear update, are used to train the networks. Considering variations arising from the inherent characteristics of memristors and errors in the programming voltages, the robustness of the MMNN and the MCNN to these variations is analyzed. Simulation results on standard datasets show that deep neural networks (DNNs) built from memristor crossbars perform satisfactorily in pattern recognition tasks and exhibit a degree of robustness to memristor variations.
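
A rough software-level sketch of one of the underlying ideas: a signed weight can be realized as the difference of two non-negative crossbar conductances, with a fixed-voltage-style update that moves each device by a constant step in the direction given by the gradient sign. The conductance range, step size, and class interface below are assumptions for illustration, not the paper's device model.

```python
import numpy as np

G_MIN, G_MAX, STEP = 0.0, 1.0, 0.01   # assumed conductance range and update step

class DifferentialCrossbar:
    """Signed weight W = G_pos - G_neg, with both conductance arrays kept in a
    non-negative, bounded range (illustrative model only)."""
    def __init__(self, shape, rng):
        self.g_pos = rng.uniform(G_MIN, G_MAX, size=shape)
        self.g_neg = rng.uniform(G_MIN, G_MAX, size=shape)

    def weights(self):
        return self.g_pos - self.g_neg

    def forward(self, x):
        # A crossbar computes a vector-matrix product (currents summed per column).
        return x @ self.weights()

    def fixed_voltage_update(self, grad):
        """Fixed-voltage-style scheme: each device changes by a constant step,
        with only the direction taken from the gradient."""
        self.g_pos = np.clip(self.g_pos - STEP * np.sign(grad), G_MIN, G_MAX)
        self.g_neg = np.clip(self.g_neg + STEP * np.sign(grad), G_MIN, G_MAX)

rng = np.random.default_rng(0)
layer = DifferentialCrossbar((4, 3), rng)
x = rng.normal(size=(1, 4))
y = layer.forward(x)
layer.fixed_voltage_update(grad=rng.normal(size=(4, 3)))
```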


2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in designing deep neural networks, and their choice affects the network's optimization and the quality of its results. Several activation functions have been introduced in machine learning for practical applications, but which activation function should be used at the hidden layers of deep neural networks has not been clearly established. Objective: The primary objective of this analysis was to determine which activation function should be used at the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured on a two-class dataset (Cat/Dog). The network used three convolutional layers, each followed by a pooling layer. The dataset was divided into two parts: the first 8000 images were used for training the network and the next 2000 images for testing it. Results: The experimental comparison was carried out by analyzing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) at the hidden layers and recording the validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU at its hidden layers (three here) gives the best results and improves overall performance in terms of accuracy and speed. These advantages of ReLU across multiple hidden layers support effective and fast retrieval of images from databases.
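
A compact sketch of the kind of comparison described: three convolution-plus-pooling blocks with the activation under test swapped per run. The input size, filter counts, and training settings are illustrative assumptions, not the paper's exact configuration; PReLU is omitted here because in Keras it is a layer rather than an activation string.

```python
import tensorflow as tf

def build_cnn(activation: str) -> tf.keras.Model:
    """Three conv blocks, each followed by max pooling, then a dense head.
    The activation under test is applied at every hidden layer."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),                  # assumed input size
        tf.keras.layers.Conv2D(32, 3, activation=activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation=activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation=activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=activation),
        tf.keras.layers.Dense(1, activation="sigmoid"),     # Cat vs. Dog
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# One model per candidate activation; each would be trained on the same split
# (e.g., 8000 training / 2000 test images) and compared on validation metrics.
models = {name: build_cnn(name) for name in ["relu", "tanh", "selu", "elu"]}
```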


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented with this algorithm is an RBF neural network with four hidden-layer neurons and one neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is designed on the FPGA as a separate computing unit. The total delay of the combinational scheme of the RBF network block was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. These results were obtained by modeling on the Spartan-3 family of chips; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
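
A software-level sketch of one common way to approximate a Gaussian activation in fixed point, via a precomputed lookup table, which is a typical route on FPGAs. The fixed-point format, table depth, and input range below are assumptions for illustration and are not taken from the article.

```python
import numpy as np

FRAC_BITS = 12                     # assumed Q4.12 fixed-point format
SCALE = 1 << FRAC_BITS
TABLE_SIZE = 256                   # assumed LUT depth
X_MAX = 4.0                        # Gaussian treated as 0 beyond |x| >= X_MAX

# Precompute the LUT: quantized exp(-x^2) samples over [0, X_MAX).
xs = np.linspace(0.0, X_MAX, TABLE_SIZE, endpoint=False)
GAUSS_LUT = np.round(np.exp(-xs ** 2) * SCALE).astype(np.int32)

def gaussian_fixed(x_fixed: int) -> int:
    """Gaussian activation on a fixed-point input, returning a fixed-point value."""
    x = abs(x_fixed) / SCALE
    if x >= X_MAX:
        return 0
    idx = int(x / X_MAX * TABLE_SIZE)
    return int(GAUSS_LUT[idx])

# Compare against the floating-point reference on a few points.
for v in (0.0, 0.5, 1.0, 2.0):
    approx = gaussian_fixed(int(v * SCALE)) / SCALE
    print(v, round(approx, 4), round(float(np.exp(-v ** 2)), 4))
```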


2021 ◽  
Vol 11 (15) ◽  
pp. 6704
Author(s):  
Jingyong Cai ◽  
Masashi Takemoto ◽  
Yuming Qiu ◽  
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and in short supply in many scenarios. Previous work has shown the benefit of computing activation functions such as the sigmoid with shift-and-add operations, although it fails to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between weights and error signals are then transferred to multiplications of their sine values, which can be replaced with simpler operations using the product-to-sum formula. In addition, a rectified sine activation function is used to convert layer inputs into sine values as well. In this way, the original multiplication-intensive operations can be computed through simple add-and-shift operations. This trigonometric approximation method provides an efficient training and inference alternative for devices without sufficient hardware multipliers. Experimental results demonstrate that the method achieves performance close to that of classical training algorithms. The proposed approach sheds new light on future hardware-customization research for machine learning.
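
A small numerical sketch of the core idea: values clustered around zero satisfy sin(x) ≈ x, so a product w·e can be approximated by sin(w)·sin(e), and the product-to-sum identity sin(w)sin(e) = ½[cos(w−e) − cos(w+e)] turns that into an add/subtract of two cosines plus a halving (a shift). The cosines themselves would be computed by shift-and-add hardware such as CORDIC, which is not shown here; the rectified-sine clipping point below is an assumption.

```python
import numpy as np

def product_to_sum_mul(w, e):
    """Approximate w * e for small w, e via sin(w) * sin(e)
    = 0.5 * (cos(w - e) - cos(w + e)); apart from the halving (a shift),
    no explicit multiplier is needed once cosines come from shift-and-add."""
    return 0.5 * (np.cos(w - e) - np.cos(w + e))

def rectified_sine(x):
    """Illustrative rectified sine activation: negatives are clamped to zero,
    positives pass through sin so layer inputs are already sine values.
    (The clipping point pi/2 is an assumption, not the paper's definition.)"""
    return np.where(x > 0, np.sin(np.clip(x, 0.0, np.pi / 2)), 0.0)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=5)   # weights clustered around zero
e = rng.normal(scale=0.05, size=5)   # backpropagated errors, also small
exact = w * e
approx = product_to_sum_mul(w, e)
print(np.max(np.abs(exact - approx)))   # error is tiny for small inputs
```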


2021 ◽  
Vol 11 (10) ◽  
pp. 4440
Author(s):  
Youheng Tan ◽  
Xiaojun Jing

Cooperative spectrum sensing (CSS) is an important topic because of its ability to address the hidden-terminal problem. However, the sensing performance of CSS remains poor, especially in low signal-to-noise ratio (SNR) situations. In this paper, convolutional neural networks (CNNs) are used to extract features of the observed signal and thereby improve sensing performance. More specifically, a novel two-dimensional dataset of the received signal is established, and three classical CNN architectures (LeNet, AlexNet, and VGG-16) are trained and analyzed on the proposed dataset as CSS schemes. In addition, sensing performance is compared between the proposed CNN-based CSS schemes and the AND, OR, and majority-voting-based CSS schemes. The simulation results show that the sensing accuracy of the proposed schemes is greatly improved and that network depth contributes to this improvement.
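
The hard-fusion baselines that the CNN schemes are compared against can be sketched as below: each secondary user contributes a one-bit local decision, which the fusion center combines with the AND, OR, or majority rule. The toy decision vector is made up for illustration.

```python
import numpy as np

def fuse(decisions, rule="majority"):
    """Combine one-bit local sensing decisions (1 = 'primary user present').

    AND:      declare present only if every user detects the signal.
    OR:       declare present if any user detects the signal.
    majority: declare present if more than half of the users detect it."""
    d = np.asarray(decisions, dtype=int)
    if rule == "and":
        return int(d.all())
    if rule == "or":
        return int(d.any())
    if rule == "majority":
        return int(d.sum() > d.size / 2)
    raise ValueError(f"unknown rule: {rule}")

local = [1, 0, 1, 1, 0]   # toy local decisions from five secondary users
print({r: fuse(local, r) for r in ("and", "or", "majority")})
```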


1991 ◽  
Vol 02 (04) ◽  
pp. 331-339 ◽  
Author(s):  
Jiahan Chen ◽  
Michael A. Shanblatt ◽  
Chia-Yiu Maa

A method for improving the performance of artificial neural networks for linear and nonlinear programming is presented. By analyzing the behavior of the conventional penalty function, the reason for the inherent degradation in accuracy is identified. Based on this, a new combination penalty function is proposed which ensures that the equilibrium point is acceptably close to the optimal point. A known neural network model has been modified by using the new penalty function, and the corresponding circuit scheme is given. Simulation results show that the relative error for linear and nonlinear programming is substantially reduced by the new method.
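
A generic sketch of the penalty-function idea behind such networks: the constrained program is relaxed into an unconstrained energy whose gradient is followed, and the penalty weight controls how far the equilibrium point drifts from the true optimum. The quadratic penalty, toy linear program, and step sizes below are illustrative assumptions; the paper's specific combination penalty is not reproduced here.

```python
import numpy as np

def penalty_descent(c, A, b, mu=100.0, lr=1e-3, steps=20000):
    """Minimize c @ x subject to A @ x <= b via a quadratic penalty:
    E(x) = c @ x + mu * sum(max(0, A @ x - b)^2). A larger mu pulls the
    equilibrium point closer to the true constrained optimum."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        violation = np.maximum(0.0, A @ x - b)
        grad = c + 2.0 * mu * (A.T @ violation)
        x -= lr * grad
    return x

# Toy LP: minimize -x1 - x2 subject to x1 + x2 <= 1, x1 >= 0, x2 >= 0.
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
# Settles near (0.5, 0.5) with a small constraint violation, as penalty methods do.
print(penalty_descent(c, A, b))
```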


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 854
Author(s):  
Nevena Rankovic ◽  
Dragica Rankovic ◽  
Mirjana Ivanovic ◽  
Ljubomir Lazic

Software estimation involves meeting a large number of different requirements, such as resource allocation, cost estimation, effort estimation, time estimation, and the changing demands of software product customers, and numerous estimation models attempt to address these problems. In our experiment, input values were clustered to mitigate the heterogeneous nature of the selected projects. Additionally, homogeneity of the data was achieved with a fuzzification method, and two different activation functions were proposed inside the hidden layer during the construction of the artificial neural networks (ANNs). We present an experiment that uses two different ANN architectures, based on Taguchi's orthogonal vector plans, to satisfy the set conditions, together with additional methods and criteria for validating the proposed model. The aim of this paper is a comparative analysis of the resulting mean magnitude of relative error (MMRE) values; at the same time, our goal is to find a relatively simple architecture that minimizes the error value while covering a wide range of different software projects. For this purpose, six different datasets are divided into four chosen clusters. The obtained results show that estimating diverse projects by dividing them into clusters can contribute to efficient, reliable, and accurate software product assessment. The contribution of this paper is a solution that requires only a small number of iterations, which reduces execution time while achieving the minimum error.
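
The evaluation criterion mentioned, mean magnitude of relative error (MMRE), is a standard effort-estimation metric and can be computed as below; the sample effort values are made up for illustration.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error: mean of |actual - predicted| / actual."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) / actual))

# Toy effort values (e.g., person-hours), for illustration only.
actual_effort = [120.0, 340.0, 95.0, 510.0]
estimated_effort = [110.0, 360.0, 100.0, 480.0]
print(round(mmre(actual_effort, estimated_effort), 4))
```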

