Theory and Numerical Analysis of Extreme Learning Machine and Its Application for Different Degrees of Defect Recognition of Hoisting Wire Rope

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Zhike Zhao ◽  
Xiaoguang Zhang

An improved classification approach based on the extreme learning machine (ELM) is proposed to address the open problem of classifying complex multiclass samples. ELM is built on the single-hidden-layer feed-forward neural network (SLFNN) and is characterized by simple parameter selection rules, fast convergence, and little human intervention. To further improve the classification precision of ELM, an improved method for generating the network structure is developed that dynamically adjusts the number of hidden nodes; the change in the number of hidden nodes serves as the update step length of the algorithm, which is therefore called the variable step incremental extreme learning machine (VSI-ELM). To examine the effect of the hidden-layer nodes on the performance of ELM, performance test data sets are taken from the open-source machine learning repository of the University of California, Irvine (UCI). Regression and classification experiments are used to study the performance of the VSI-ELM model, and the results show that the algorithm is valid. Classifying different degrees of broken wires remains an open problem in the nondestructive testing of hoisting wire rope, where the magnetic flux leakage (MFL) method is an efficient nondestructive technique that plays an important role in safety evaluation. To confirm that the proposed VSI-ELM model is effective and reliable on real data, it is applied to the classification of different types of samples from MFL signals. The final experimental results show that the VSI-ELM algorithm achieves faster classification and higher accuracy on different degrees of broken wires.
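
A minimal sketch of the incremental idea follows. The sigmoid activation, the step-halving schedule, and the validation-RMSE stopping criterion are illustrative assumptions of this sketch, not the authors' exact VSI-ELM procedure.

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """Train a basic ELM: random input weights, analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def vsi_elm(X, T, Xv, Tv, max_hidden=200, tol=1e-3, seed=0):
    """Grow the hidden layer with a variable step until validation RMSE stalls."""
    rng = np.random.default_rng(seed)
    n, step, best = 10, 10, np.inf
    while n <= max_hidden:
        W, b, beta = elm_fit(X, T, n, rng)
        rmse = np.sqrt(np.mean((elm_predict(Xv, W, b, beta) - Tv) ** 2))
        if best - rmse < tol:          # little gain: shrink the update step
            step = max(step // 2, 1)
        best = min(best, rmse)
        n += step
    return W, b, beta                  # sketch returns the last model fitted
```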

Extreme Learning Machine (ELM) is an efficient and effective least-squares-based learning algorithm for classification and regression problems based on the single-hidden-layer feed-forward neural network (SLFN). It has been shown in the literature to offer fast convergence and good generalization ability on moderate datasets. However, computing the pseudoinverse becomes a considerable challenge when there is a large number of hidden nodes, or a large number of training instances, in complex pattern recognition problems. To address this problem, a few approaches such as EM-ELM and DF-ELM have been proposed in the literature. In this paper, a new rank-based matrix decomposition of the hidden-layer matrix is introduced to attain near-optimal training time and reduce the computational complexity for a large number of hidden nodes. The results show that it has constant training time, close to the minimal training time and far below the worst-case training time of the DF-ELM algorithm, which has been shown to be efficient in the recent literature.
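
The paper's exact decomposition is not reproduced here, but the sketch below shows the cost argument that motivates it: solving through the smaller of the two Gram matrices keeps the cubic term at min(N, L) rather than the larger dimension. The small ridge term `reg` is an assumption added for numerical stability.

```python
import numpy as np

def elm_output_weights(H, T, reg=1e-6):
    """Solve beta = pinv(H) @ T via the smaller Gram matrix.

    H is N x L (instances x hidden nodes).  Forming the L x L or the
    N x N Gram matrix, whichever is smaller, keeps the linear solve at
    O(min(N, L)^3) instead of always paying for the larger dimension.
    """
    N, L = H.shape
    if L <= N:   # few hidden nodes: solve the L x L system
        A = H.T @ H + reg * np.eye(L)
        return np.linalg.solve(A, H.T @ T)
    else:        # many hidden nodes: solve the N x N system
        A = H @ H.T + reg * np.eye(N)
        return H.T @ np.linalg.solve(A, T)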


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Jie Lai ◽  
Xiaodan Wang ◽  
Rui Li ◽  
Yafei Song ◽  
Lei Lei

To prevent overfitting and improve the generalization performance of the Extreme Learning Machine (ELM), this paper proposes a new regularization method, Biased DropConnect, and a new regularized ELM (BD-ELM) that uses both Biased DropConnect and Biased Dropout. Like Biased Dropout applied to hidden nodes, Biased DropConnect exploits the differences among connection weights to retain more network information after dropping. Regular Dropout and DropConnect set the hidden-layer outputs and connection weights to 0 with a single fixed probability. Biased DropConnect and Biased Dropout, in contrast, divide the connection weights and hidden nodes into high and low groups by a threshold and set each group to 0 with a different probability. Connection weights with high values and hidden nodes with high activation values, which contribute more to network performance, are kept with a lower drop probability, while low-value weights and hidden nodes are given a higher drop probability, so that the drop probability of the whole network remains a fixed constant. With Biased DropConnect and Biased Dropout regularization, BD-ELM enhances the sparsity of parameters and reduces structural complexity. Experiments on various benchmark datasets show that Biased DropConnect and Biased Dropout effectively address overfitting and that BD-ELM provides higher classification accuracy than ELM, R-ELM, and Drop-ELM.
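
A sketch of the biased mask construction is given below. The median threshold and the 80/20 split of drops between the groups are illustrative assumptions; the key property, which the arithmetic preserves, is that the network-wide drop rate stays at the fixed constant p_total.

```python
import numpy as np

def biased_drop_mask(values, p_total, p_low_share=0.8, rng=None):
    """Binary keep-mask: low-magnitude entries are dropped more often.

    `p_total` is the overall drop probability.  Entries below the median
    magnitude take `p_low_share` of the drops and high-magnitude entries
    the remainder; with a median split the two groups are equal-sized,
    so the expected network-wide drop rate equals p_total exactly.
    """
    rng = rng or np.random.default_rng()
    mag = np.abs(values)
    low = mag < np.median(mag)                       # threshold into two groups
    p = np.where(low, 2 * p_total * p_low_share,     # higher drop for low group
                      2 * p_total * (1 - p_low_share))
    return (rng.random(values.shape) >= p).astype(values.dtype)

# Biased DropConnect on weights:      W_drop = W * biased_drop_mask(W, 0.3)
# Biased Dropout on hidden outputs:   H_drop = H * biased_drop_mask(H, 0.3)
```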


2016 ◽  
Vol 8 (1) ◽  
pp. 5-15
Author(s):  
Liu Yusong ◽  
Su Zhixun ◽  
Zhang Bingjie ◽  
Gong Xiaoling ◽  
Sang Zhaoyang

Extreme learning machine (ELM) is an efficient algorithm, but it requires more hidden nodes than BP algorithms to reach matched performance. Recently, an efficient learning algorithm, the upper-layer-solution-unaware algorithm (USUA), was proposed for the single-hidden-layer feed-forward neural network; it needs fewer hidden nodes and less testing time than ELM. In this paper, we give a theoretical analysis of USUA. The theoretical results show that the error function decreases monotonically during training, that the gradient of the error function with respect to the weights tends to zero (weak convergence), and that the weight sequence converges to a fixed point (strong convergence) as the number of iterations approaches infinity. A simulation on the MNIST database of handwritten digits effectively verifies the theoretical results.
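
The monotone-decrease claim can be checked empirically with a sketch like the one below: gradient descent on the hidden weights of a sigmoid SLFN, with the upper layer solved by least squares at each epoch and treated as fixed during the gradient step. This is a stand-in for the USUA update under stated assumptions, not its exact form.

```python
import numpy as np

def train_and_log(X, T, n_hidden=32, lr=1e-3, epochs=50, seed=0):
    """Log the squared error per epoch to inspect its monotone decrease."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    errors = []
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W)))   # sigmoid hidden output
        beta = np.linalg.pinv(H) @ T         # least-squares upper layer
        E = H @ beta - T                     # residual
        errors.append(0.5 * np.sum(E ** 2))  # error value to monitor
        dH = (E @ beta.T) * H * (1 - H)      # backprop through the sigmoid,
        W -= lr * X.T @ dH                   # with beta held fixed this step
    return W, errors
```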


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Yanpeng Qu ◽  
Ansheng Deng

Many strategies have been exploited to reinforce the effectiveness and efficiency of the extreme learning machine (ELM), from both methodology and structure perspectives. By activating all the hidden nodes to different degrees, the local coupled extreme learning machine (LC-ELM) is capable of decoupling the link architecture between the input layer and the hidden layer in ELM. These activation degrees are jointly determined by the addresses and fuzzy membership functions assigned to the hidden nodes. To further refine the weight search space of LC-ELM, this paper presents an optimised variant, entitled the evolutionary local coupled extreme learning machine (ELC-ELM). This method uses the differential evolution (DE) algorithm to optimise the hidden-node addresses and the radii of the fuzzy membership functions until a qualifying fitness value or the maximum number of iterations is reached. The efficacy of the presented work is verified through systematic simulation experiments on both regression and classification applications. Experimental results demonstrate that the proposed technique outperforms three ELM alternatives, namely the classical ELM, LC-ELM, and OSFuzzyELM, on a series of reliable performance measures.
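
A hedged sketch of the search loop follows, using scipy's stock DE optimiser. The Gaussian membership function, the flat encoding of addresses and radii into one parameter vector, and the validation-RMSE fitness are assumptions of this sketch rather than the paper's exact setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

def make_fitness(X, T, Xv, Tv, W, b, n_hidden):
    """Fitness for DE: decode addresses/radii, fit LC-ELM, return val RMSE."""
    d = X.shape[1]
    def fitness(params):
        addr = params[:n_hidden * d].reshape(n_hidden, d)  # node addresses
        radius = params[n_hidden * d:]                     # membership radii
        def hidden(Z):
            dist = ((Z[:, None, :] - addr[None]) ** 2).sum(-1)
            mu = np.exp(-dist / (2 * radius ** 2))         # fuzzy membership
            return mu * (1.0 / (1.0 + np.exp(-(Z @ W + b))))  # coupled activation
        beta = np.linalg.pinv(hidden(X)) @ T
        return np.sqrt(np.mean((hidden(Xv) @ beta - Tv) ** 2))
    return fitness

# One (lo, hi) bound per evolved parameter; W, b are the fixed random weights.
# bounds = [(-1, 1)] * (n_hidden * X.shape[1]) + [(0.1, 3.0)] * n_hidden
# result = differential_evolution(make_fitness(X, T, Xv, Tv, W, b, n_hidden),
#                                 bounds, maxiter=100, tol=1e-6)
```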


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Fei Gao ◽  
Jiangang Lv

The Single-Stage Extreme Learning Machine (SS-ELM) is presented in this paper for mechanical fault diagnosis. It changes the traditional feature mapping of the extreme learning machine (ELM): the feature vectors extracted by signal processing methods are regarded directly as the outputs of the network's hidden layer. This effectively avoids both the uncertainty introduced when training data are transformed from the input space to the ELM feature space by the random ELM mapping and the problem of selecting the number of hidden nodes. Experimental results on diesel engine fault diagnosis show the good performance of the SS-ELM algorithm.
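
The sketch below illustrates the single-stage idea: a signal-processing feature extractor (here a placeholder FFT band-energy extractor, an assumption of this sketch) stands in for the hidden layer, leaving only the output weights to be solved analytically.

```python
import numpy as np

def extract_features(signals, n_bands=16):
    """Placeholder extractor: per-signal FFT band energies as features."""
    spec = np.abs(np.fft.rfft(signals, axis=1)) ** 2
    bands = np.array_split(spec, n_bands, axis=1)
    return np.stack([band.sum(axis=1) for band in bands], axis=1)

def ss_elm_fit(signals, T):
    H = extract_features(signals)    # feature vectors act directly as H
    return np.linalg.pinv(H) @ T     # output weights in a single stage

def ss_elm_predict(signals, beta):
    return extract_features(signals) @ beta
```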


Filomat ◽  
2020 ◽  
Vol 34 (15) ◽  
pp. 4985-4996
Author(s):  
Bolin Liao ◽  
Chuan Ma ◽  
Meiling Liao ◽  
Shuai Li ◽  
Zhiguan Huang

In this paper, a novel type of feed-forward neural network with a simple structure is proposed and investigated for pattern classification. Because this network's parameter setting mirrors that of the Extreme Learning Machine (ELM), it is termed the mirror extreme learning machine (MELM). In the MELM, the input weights are determined analytically by the pseudoinverse method while the output weights are generated randomly, the reverse of the conventional ELM. In addition, a growing method is adopted to obtain the optimal hidden-layer structure. Finally, to evaluate the performance of the proposed MELM, extensive comparative experiments are performed on different real-world classification datasets. Experimental results validate the high classification accuracy and good generalization performance of the proposed neural network in pattern classification, despite its simple structure.
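
One plausible reading of the mirrored layout is sketched below: draw the output weights at random, derive hidden-layer targets from them, and recover the input weights analytically through the inverse sigmoid. The logit inversion and the clipping constant are assumptions of this sketch, not the paper's stated construction.

```python
import numpy as np

def melm_fit(X, T, n_hidden=32, seed=0):
    """T is an n x c target matrix (e.g. one-hot class labels)."""
    rng = np.random.default_rng(seed)
    beta = rng.normal(size=(n_hidden, T.shape[1]))   # random output weights
    H_target = T @ np.linalg.pinv(beta)              # desired hidden outputs
    H_target = np.clip(H_target, 1e-3, 1 - 1e-3)     # keep the logit finite
    W = np.linalg.pinv(X) @ np.log(H_target / (1 - H_target))  # analytic input weights
    return W, beta

def melm_predict(X, W, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W)))
    return H @ beta
```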


2012 ◽  
Vol 608-609 ◽  
pp. 564-568 ◽  
Author(s):  
Yi Hui Zhang ◽  
He Wang ◽  
Zhi Jian Hu ◽  
Meng Lin Zhang ◽  
Xiao Lu Gong ◽  
...  

The extreme learning machine (ELM) is a new and effective learning algorithm for single-hidden-layer feed-forward neural networks. ELM only requires setting the number of hidden-layer nodes: there is no need to adjust the input weights or the hidden-unit biases, and it produces the unique optimum solution, so it offers fast learning and good generalization ability. The back-propagation (BP) neural network, by contrast, is the most maturely applied. This paper introduces the extreme learning machine into wind power prediction and compares it with a wind power prediction method based on the BP neural network. The study shows that the extreme learning machine achieves better prediction accuracy and shorter model training time.
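
A sketch of how such a forecaster could be framed: past readings of the power series become the input vector and the next reading the target, after which the ELM trains in one shot with no iterative weight adjustment. The window length and hidden size are illustrative choices, not taken from the paper.

```python
import numpy as np

def make_lagged(series, lags=12):
    """Turn a 1-D series into (lagged inputs, next-step targets)."""
    X = np.stack([series[i:i - lags] for i in range(lags)], axis=1)
    return X, series[lags:]

def elm_forecaster(series, n_hidden=50, lags=12, seed=0):
    rng = np.random.default_rng(seed)
    X, t = make_lagged(series, lags)
    W = rng.normal(size=(lags, n_hidden))            # fixed random input weights
    b = rng.normal(size=n_hidden)                    # fixed random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = np.linalg.pinv(H) @ t                     # one-shot training step
    return lambda window: (1.0 / (1.0 + np.exp(-(window @ W + b)))) @ beta
```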


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Qinwei Fan ◽  
Ting Liu

The extreme learning machine (ELM) was put forward for single-hidden-layer feedforward networks. Because of its powerful modeling ability and the little human intervention it requires, the ELM algorithm has been used widely in both regression and classification experiments. However, to achieve the required accuracy, it needs many more hidden nodes than conventional neural networks typically do. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization. The algorithm updates the weights in the direction along which the overall squared error decreases the most, and can thereby sparsify the network structure very efficiently. Numerical experiments show that the ELM algorithm with smoothing L0 regularization has fewer hidden nodes but better generalization performance than the original ELM and ELM with L1 regularization.
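
The smoothing idea can be sketched as follows: the non-differentiable L0 count of nonzero weights is replaced by a smooth surrogate whose gradient can join the usual error gradient. The Gaussian surrogate and the value of sigma below are common choices for smoothing L0, not necessarily the exact smoother used in the paper.

```python
import numpy as np

def smoothed_l0(w, sigma=0.1):
    """Smooth surrogate of ||w||_0: tends to the nonzero count as sigma -> 0."""
    return np.sum(1.0 - np.exp(-w ** 2 / (2 * sigma ** 2)))

def smoothed_l0_grad(w, sigma=0.1):
    """Gradient of the surrogate; vanishes for both zero and large weights."""
    return (w / sigma ** 2) * np.exp(-w ** 2 / (2 * sigma ** 2))

# Inside a gradient-based ELM training step, the penalized update would be:
# beta -= lr * (error_grad + lam * smoothed_l0_grad(beta))
```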


2021 ◽  
pp. 107482
Author(s):  
Carlos Perales-González ◽  
Francisco Fernández-Navarro ◽  
Javier Pérez-Rodríguez ◽  
Mariano Carbonero-Ruz
