Pruning Multilayered ELM Using Cholesky Factorization and Givens Rotation Transformation

2021, Vol 2021, pp. 1-11
Author(s): Jingyi Liu, Xinxin Liu, Chongmin Liu, Ba Tuan Le, Dong Xiao

The extreme learning machine (ELM) was originally proposed for training single hidden layer feedforward neural networks, to overcome the challenges faced by the backpropagation (BP) learning algorithm and its variants. Recent studies show that ELM can be extended to the multilayered feedforward neural network, in which a hidden node may itself be a subnetwork of nodes or a combination of other hidden nodes. Although the multilayered ELM (MELM) shows stronger nonlinear expression ability and stability than the single-hidden-layer ELM in both theoretical and experimental results, deepening the network structure also aggravates the problem of parameter optimization, which usually demands more time for model selection and increases the computational complexity. This paper uses a Cholesky factorization strategy and Givens rotation transformations to select the hidden nodes of the MELM and obtain a number of nodes better suited to the network. First, an initial network with a large number of hidden nodes is built; the nodes are then pruned using the idea of ridge regression, and finally a complete neural network is obtained. The algorithm thus eliminates the need to set the number of hidden nodes manually and is fully automatic. By reusing information from the previous generation's connection weight matrix, re-calculating the weight matrix during network simplification can be avoided. As in other matrix factorization methods, the Cholesky factor is updated by Givens rotation transformations, giving a fast downdating of the current connection weight matrix and thereby ensuring the numerical stability and high efficiency of the pruning process.
Empirical studies on several commonly used classification benchmark problems and on real datasets collected from the coal industry show that, compared with the traditional ELM algorithm, the pruning multilayered ELM algorithm proposed in this paper can find the optimal number of hidden nodes automatically and has better generalization performance.
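The ridge-regression step behind this kind of pruning can be sketched in a few lines: the ELM output weights solve a regularized least-squares problem whose Gram matrix admits a Cholesky factorization (the factor that the paper then downdates with Givens rotations as nodes are removed). The sketch below is a minimal single-hidden-layer illustration under made-up data and sizes, not the paper's multilayer pruning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 features, scalar target.
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Random hidden layer of an ELM: input weights/biases are fixed, not trained.
L = 50  # number of hidden nodes (deliberately oversized before pruning)
W = rng.standard_normal((5, L))
b = rng.standard_normal(L)
H = np.tanh(X @ W + b)  # hidden-layer output matrix

# Ridge-regression output weights: beta = (H^T H + c I)^{-1} H^T y,
# computed via a Cholesky factorization of the regularized Gram matrix.
c = 1e-2
A = H.T @ H + c * np.eye(L)
R = np.linalg.cholesky(A)        # A = R R^T, R lower triangular
z = np.linalg.solve(R, H.T @ y)  # solve R z = H^T y
beta = np.linalg.solve(R.T, z)   # solve R^T beta = z

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

Removing a hidden node deletes one row and column of `A`; rather than refactorizing from scratch, the paper restores the triangular form of the remaining factor with Givens rotations, which is what keeps the pruning loop cheap and numerically stable.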

2011, Vol 403-408, pp. 3867-3874
Author(s): Sudhir Kumar Sharma, Pravin Chandra

In this paper we propose a constructive algorithm with adaptive sigmoidal function (CAASF) for designing single hidden layer feedforward neural networks. The proposed algorithm emphasizes both architectural adaptation and functional adaptation during training, building single hidden layer networks dynamically. The activation functions used at the non-linear hidden nodes belong to a well-defined sigmoidal class and are adapted during training. The algorithm determines not only the optimum number of hidden nodes but also the optimum sigmoidal function for the non-linear nodes. One simple variant derived from CAASF keeps the sigmoidal function at the hidden nodes fixed. The two variants are compared on five regression functions. Simulation results reveal that the adaptive sigmoidal function offers several advantages over the traditional fixed sigmoid, resulting in increased flexibility, smoother learning, better convergence, and better generalization performance.
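As a rough illustration of what "adapting the sigmoidal function during training" can mean, the sketch below gives a sigmoid a trainable slope parameter and fits that slope by gradient descent on a one-unit toy problem. This is only a hedged stand-in: the abstract does not spell out CAASF's actual parameterization or update rule, and the target, data, and learning rate here are invented.

```python
import numpy as np

def sigmoid(x, a=1.0):
    """Sigmoidal unit with an adaptive slope parameter a (a > 0)."""
    return 1.0 / (1.0 + np.exp(-a * x))

def dsigmoid_da(x, a):
    """Gradient of the unit's output with respect to its slope parameter."""
    s = sigmoid(x, a)
    return x * s * (1.0 - s)

# One-unit illustration: adapt the slope by gradient descent so the unit
# matches a steeper target sigmoid (slope 3) on sampled inputs.
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 100)
target = sigmoid(x, a=3.0)

a, lr = 1.0, 5.0
for _ in range(2000):
    err = sigmoid(x, a) - target
    a -= lr * np.mean(err * dsigmoid_da(x, a))  # mean-squared-error gradient step

print(f"learned slope: {a:.3f}")
```

In a full constructive algorithm the same gradient machinery would run jointly with the usual weight updates, so each hidden node's nonlinearity is shaped alongside its incoming weights.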


2019, Vol 13, pp. 302-309
Author(s): Jakub Basiakowski

The following paper presents the results of research on the impact of machine learning on the construction of a voice-controlled interface. Two different models were used for the analysis: a feedforward neural network containing one hidden layer and a more complex convolutional neural network. The two models are also compared in terms of quality and the course of training.


2019, Vol 10 (37), pp. 31-44
Author(s): Engin Kandıran, Avadis Hacınlıyan

Artificial neural networks are commonly accepted as a very successful tool for global function approximation. For this reason, many studies consider them a good approach to forecasting chaotic time series. For a given time series, the Lyapunov exponent is a good parameter for characterizing the series as chaotic or not. In this study, we use three different neural network architectures to test the capability of neural networks to forecast time series generated from different dynamical systems. In addition to forecasting the time series, the Lyapunov exponents of the studied systems are forecast using the feedforward neural network with a single hidden layer.
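A minimal version of such a forecasting experiment: generate a chaotic series (here the logistic map, a stand-in toy system rather than one from the study), embed it with a few lags, and fit a single-hidden-layer network for one-step-ahead prediction. For brevity the hidden weights are left random and only the linear readout is fit by least squares, an ELM-style shortcut rather than whatever training the study actually used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Chaotic series from the logistic map x_{t+1} = r x_t (1 - x_t) with r = 4.
n = 600
x = np.empty(n)
x[0] = 0.3
for t in range(n - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Delay embedding: predict x_{t+d} from the d previous values.
d = 3
X = np.column_stack([x[i:n - d + i] for i in range(d)])
y = x[d:]

# Single hidden layer with random weights; only the linear readout is fit.
L = 40
W = rng.standard_normal((d, L))
b = rng.standard_normal(L)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
print("one-step RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

A forecasting model of this kind can also be iterated (feeding predictions back in) to probe how quickly trajectories diverge, which is the behavior the Lyapunov exponent quantifies.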


1995, Vol 117 (3), pp. 411-415
Author(s): C. James Li, Taehee Kim

A fully automatic feedforward neural network structural and weight learning algorithm is described. The algorithm, Augmentation by Training with Residuals (ATR), requires from the user neither a guess of initial weight values nor the number of neurons in the hidden layer. It takes an incremental approach in which a hidden neuron is trained to model the mapping between the input and output of the current exemplars and is then appended to the existing network. The exemplars are made orthogonal to the newly identified hidden neuron and used for the training of the next hidden neuron. The process continues until a desired accuracy is reached. This structural and weight learning algorithm is applied to the identification of a two-degree-of-freedom planar robot, a Van der Pol oscillator, and a Mackey-Glass equation. The algorithm is shown to be effective in modeling all three systems and is far superior to a linear modeling scheme in the case of the robot.
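The incremental residual idea can be sketched as a greedy loop: train one hidden neuron to model the current residual, append it to the network, subtract its contribution, and repeat. In the toy sketch below each new neuron is chosen by a crude random search with a closed-form output weight; ATR's actual neuron training and its orthogonalization of the exemplars differ, and all data here is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: one input, smooth nonlinear mapping.
x = np.linspace(-2, 2, 200)[:, None]
y = np.sin(2 * x[:, 0]) + 0.5 * x[:, 0]

residual = y.copy()
outputs, coefs = [], []

# Add hidden neurons one at a time; each models the current residual.
for _ in range(10):
    best = None
    for _ in range(200):  # candidate random weight/bias pairs
        w, b = rng.standard_normal(), rng.standard_normal()
        h = np.tanh(w * x[:, 0] + b)
        a = (h @ residual) / (h @ h)  # least-squares output weight
        err = np.mean((residual - a * h) ** 2)
        if best is None or err < best[0]:
            best = (err, h, a)
    _, h, a = best
    outputs.append(h)
    coefs.append(a)
    residual = residual - a * h  # the next neuron sees only the residual

y_hat = sum(a * h for a, h in zip(coefs, outputs))
print("final RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

Because each output weight is a least-squares fit, the residual norm can never increase, so the loop improves monotonically until the accuracy target (or a neuron budget) is reached.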


2019, Vol 33 (01n03), pp. 1940034
Author(s): Li Hongyu, Chen Hui, Wu Ying, Chen Yong, Yi Wei

The two-dimensional morphology of the cladding layer has an important influence on the quality of the cladding layer and its crack tendency. Using the powerful nonlinear processing ability of the single hidden layer feedforward neural network, a prediction model between the cladding technological parameters and the two-dimensional morphology of the cladding layer is established. Taking the cladding parameters as input and the two-dimensional morphology of the cladding as output, the experimental data are used to train the network to achieve a high-level mapping between input and output. On this basis, the extreme learning machine algorithm is used to optimize the single hidden layer feedforward neural network, overcoming the slow convergence speed, the large number of training parameters, and the tendency toward local convergence in the back-propagation algorithm. The results show that the relationship between the cladding process parameters and the two-dimensional morphology of the cladding layer can be roughly captured by the back-propagation algorithm; however, the predictions are unstable, with an error rate between 10% and 40%. The neural network optimized by the extreme learning machine yields a better prediction, with an error rate of 10-20%.
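For contrast with the ELM approach sketched earlier in this listing, plain back-propagation training of a single-hidden-layer network, the baseline this abstract criticizes, looks roughly like the sketch below. The data is a synthetic stand-in (the cladding measurements are not reproduced here), and the architecture and hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in data: 3 "process parameters" in, 2 "morphology" values out.
X = rng.uniform(-1, 1, (100, 3))
Y = np.column_stack([np.sin(X @ [1.0, 0.5, -0.5]), X[:, 0] * X[:, 1]])

# Single hidden layer trained by full-batch gradient descent (back-propagation).
L, lr = 10, 0.02
W1 = rng.standard_normal((3, L)) * 0.5
b1 = np.zeros(L)
W2 = rng.standard_normal((L, 2)) * 0.5
b2 = np.zeros(2)

for _ in range(5000):
    H = np.tanh(X @ W1 + b1)         # forward pass
    Y_hat = H @ W2 + b2
    dY = (Y_hat - Y) / len(X)        # gradient of mean-squared error
    dH = (dY @ W2.T) * (1 - H ** 2)  # back-propagate through tanh
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(0)

rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print("BP train RMSE:", rmse)
```

Every weight update here requires a full forward and backward pass, and the learning rate must be tuned by hand; an ELM replaces this whole loop with one linear solve for the output weights, which is the speed and stability advantage the abstract reports.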

