A Novel Improved ELM Algorithm for a Real Industrial Application

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Hai-Gang Zhang ◽  
Sen Zhang ◽  
Yi-Xin Yin

It is well known that feedforward neural networks meet a number of difficulties in applications because of their slow learning speed. The extreme learning machine (ELM) is a single hidden layer feedforward neural network method aimed at improving training speed. The ELM algorithm has since received wide application owing to its good generalization performance and fast learning speed. However, several problems in ELM still need to be solved. In this paper, a new improved ELM algorithm named R-ELM is proposed to handle the multicollinearity problem that appears in the computation of the ELM output weights. The proposed algorithm is employed in bearing fault detection using stator current monitoring. Simulation results show that the R-ELM algorithm has better stability and generalization performance than the original ELM and other neural network methods.
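The abstract does not spell out R-ELM's construction, but the multicollinearity issue it targets arises when the hidden-output Gram matrix is near-singular, and a common remedy is a ridge (Tikhonov) term in the output-weight solve. A minimal sketch of such a regularized ELM in NumPy (function names, the tanh activation, and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def relm_fit(X, T, n_hidden=40, lam=1e-2, seed=0):
    """Regularized ELM sketch: random hidden layer, ridge solve for output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never tuned
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # The ridge term lam*I keeps the solve stable when H^T H is
    # near-singular -- the multicollinearity problem the paper targets.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Without the `lam * np.eye(...)` term this reduces to the ordinary least-squares ELM solve, which is exactly where strongly correlated hidden outputs cause instability.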

Author(s):  
JUNHAI ZHAI ◽  
HONGYU XU ◽  
YAN LI

Extreme learning machine (ELM) is an efficient and practical learning algorithm for training single hidden layer feedforward neural networks (SLFNs). ELM can provide good generalization performance at extremely fast learning speed. However, ELM suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, this paper proposes an approach that fuses extreme learning machines with the fuzzy integral (F-ELM). The proposed algorithm consists of three stages. First, the bootstrap technique is employed to generate several subsets of the original dataset. Second, probabilistic SLFNs are trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with the fuzzy integral. The experimental results show that the proposed approach alleviates the instability and over-fitting problems to some extent and increases prediction accuracy.
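The fuzzy-integral fusion itself is beyond a short sketch, but the bootstrap-then-train structure of the first two stages can be outlined as below, with plain averaging standing in for the fuzzy integral in the fusion stage (all names and values are illustrative):

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """One base ELM: random hidden layer, least-squares output weights."""
    W = rng.uniform(-4, 4, (X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-4, 4, n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.lstsq(H, T, rcond=None)[0]
    return W, b, beta

def bootstrap_elm_ensemble(X, T, n_models=5, n_hidden=20, seed=0):
    """Stage 1: bootstrap subsets of the data; stage 2: one ELM per subset."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), len(X), replace=True)  # bootstrap resample
        models.append(elm_fit(X[idx], T[idx], n_hidden, rng))
    return models

def fuse_predict(X, models):
    # Stage 3 placeholder: simple averaging instead of the fuzzy integral.
    preds = [np.tanh(X @ W + b) @ beta for W, b, beta in models]
    return np.mean(preds, axis=0)
```

Averaging over bootstrap-trained base learners already damps the variance that makes a single ELM unstable; the fuzzy integral refines this by weighting the base learners non-uniformly.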


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Qian Leng ◽  
Honggang Qi ◽  
Jun Miao ◽  
Wentao Zhu ◽  
Guiping Su

The one-class classification problem has been investigated thoroughly over the past decades. The autoencoder, one of the most effective neural network approaches to one-class classification, has been applied successfully in many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed of the autoencoder neural network, we propose a simple and efficient one-class classifier based on the extreme learning machine (ELM). The essence of ELM is that the hidden layer need not be tuned and the output weights can be determined analytically, which leads to much faster learning. The experimental evaluation conducted on several real-world benchmarks shows that the ELM-based one-class classifier can learn hundreds of times faster than the autoencoder and is competitive with a variety of one-class classification methods.
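A minimal sketch in the spirit of what is described here: reconstruct the inputs through a random hidden layer, solve the output weights analytically, and flag points whose reconstruction error exceeds a threshold set on the training data (the tanh activation and the 95th-percentile threshold are illustrative choices, not from the paper):

```python
import numpy as np

def ocelm_fit(X, n_hidden=40, seed=0):
    """One-class ELM sketch: map inputs back to themselves through a random
    hidden layer; a large reconstruction error flags an outlier."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.lstsq(H, X, rcond=None)[0]   # analytic output weights
    err = np.linalg.norm(H @ beta - X, axis=1)    # per-sample reconstruction error
    thresh = np.percentile(err, 95)               # accept 95% of training data
    return W, b, beta, thresh

def ocelm_inlier(X, W, b, beta, thresh):
    err = np.linalg.norm(np.tanh(X @ W + b) @ beta - X, axis=1)
    return err <= thresh   # True = inlier, False = outlier
```

The whole training procedure is two matrix products and one least-squares solve, which is where the "hundreds of times faster than autoencoder" speed-up comes from.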


Author(s):  
Ahmed Kawther Hussein

Arabic calligraphy is a form of Arabic writing art in which letters can be written in various curved or segmented styles. Efforts to automate the identification of Arabic calligraphy using artificial intelligence have been fewer than for other languages. Hence, this article proposes using four types of features and a single hidden layer neural network trained on Arabic calligraphy to predict the type of calligraphy used. For the neural network, we compared the case of non-connected input and output layers in the extreme learning machine (ELM) with the case of connected input-output layers in the fast learning network (FLN). The prediction accuracy of the FLN was superior to that of the ELM, which showed variation in the obtained accuracy.
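The architectural contrast described here can be sketched as follows: a plain ELM solves its output weights over the hidden outputs only, while an FLN-style network also connects the inputs directly to the output layer, so the analytic solve covers the augmented matrix [H | X] (a rough sketch; names and sizes are illustrative):

```python
import numpy as np

def fln_fit(X, T, n_hidden=20, seed=0):
    """FLN-style sketch: hidden layer plus direct input-to-output connections,
    all output weights solved analytically in one least-squares step."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    H_aug = np.hstack([H, X])                     # [hidden outputs | raw inputs]
    beta = np.linalg.lstsq(H_aug, T, rcond=None)[0]
    return W, b, beta

def fln_predict(X, W, b, beta):
    return np.hstack([np.tanh(X @ W + b), X]) @ beta
```

Dropping the `X` columns from `H_aug` recovers the plain ELM case with non-connected input and output layers.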


Author(s):  
Qingsong Xu

Extreme learning machine (ELM) is a learning algorithm for single hidden layer feedforward neural networks. In theory, this algorithm is able to provide good generalization capability at extremely fast learning speed. Comparative studies on benchmark function approximation problems revealed that ELM can learn thousands of times faster than conventional neural networks (NNs) and can produce good generalization performance in most cases. Unfortunately, research on damage localization using ELM is limited in the literature. In this chapter, ELM is extended to the domain of damage localization of plate structures. Its effectiveness in comparison with typical neural networks such as the back-propagation neural network (BPNN) and the least squares support vector machine (LSSVM) is illustrated through experimental studies. Comparative investigations in terms of learning time and localization accuracy are carried out in detail. It is shown that ELM paves a new way in the domain of plate structural health monitoring. Both advantages and disadvantages of using ELM are discussed.


2014 ◽  
Vol 1049-1050 ◽  
pp. 1292-1296
Author(s):  
Qing Feng Xia

Extreme Learning Machine-Radial Basis Function (ELM-RBF) networks inherit not only the RBF network's merit of not suffering from local minima but also the ELM's merit of fast learning speed. Nevertheless, how to improve the generalization ability of the ELM-RBF network remains an active research area. Genetic algorithms (GAs) have unique advantages for solving optimization problems. With this in mind, this paper adopts a GA to optimize the centers and bias values of the hidden-layer neurons of the ELM-RBF network. Experimental results indicate that the proposed combined algorithm has better generalization performance than the classical ELM-RBF and achieves the basic anticipated design goals.


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Adnan O. M. Abuassba ◽  
Dezheng Zhang ◽  
Xiong Luo ◽  
Ahmad Shaheryar ◽  
Hazrat Ali

Extreme Learning Machine (ELM) is a fast-learning algorithm for single hidden layer feedforward neural networks (SLFNs). It often has good generalization performance. However, it may overfit the training data when it has more hidden nodes than needed. To address generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of the training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function that increases diversity and accuracy in the final ensemble. Finally, the class label of unseen data is predicted using a majority-vote approach. Splitting the training data into subsets and incorporating heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and fewer base classifiers compared with other models (AdaBoost, bagging, dynamic ELM ensemble, data-splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
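The ensemble recipe described (resampled subsets, varied base ELMs, majority vote) can be sketched roughly as below, with the regularization strength varied across base learners as a simple stand-in for the paper's heterogeneous Regularized-/L2-/Kernel-ELM mix (names and values are illustrative):

```python
import numpy as np

def elm_train(X, Y, n_hidden, lam, rng):
    """One base classifier: regularized ELM on one-hot targets Y."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def build_ensemble(X, Y, lams=(1e-3, 1e-1, 1.0), n_hidden=20, frac=0.75, seed=0):
    """Heterogeneity via different regularization per base learner,
    each trained on a random resample of the training data."""
    rng = np.random.default_rng(seed)
    models = []
    for lam in lams:
        idx = rng.choice(len(X), int(frac * len(X)), replace=True)
        models.append(elm_train(X[idx], Y[idx], n_hidden, lam, rng))
    return models

def majority_vote(X, models):
    """Final stage: each base ELM votes; the most common label wins."""
    votes = np.stack([np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
                      for W, b, beta in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

The diversity comes from two sources at once, different data subsets and different base-learner configurations, which is what distinguishes this heterogeneous scheme from plain bagging.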


2018 ◽  
Vol 246 ◽  
pp. 03018
Author(s):  
Zuozhi Liu ◽  
JinJian Wu ◽  
Jianpeng Wang

Extreme learning machine (ELM) is a novel learning algorithm for generalized single hidden layer feedforward networks (SLFNs). Although it shows fast learning speed in many areas, there is still room for improvement in computational cost. To address this issue, this paper proposes an improved ELM (FRCF-ELM) that employs the full-rank Cholesky factorization to compute the output weights instead of the traditional SVD. In addition, this paper proves in theory that the proposed FRCF-ELM has lower computational complexity. Experimental results on several benchmark applications indicate that the proposed FRCF-ELM learns faster than the original ELM algorithm while preserving good generalization performance.
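The core computational idea can be illustrated directly: when the hidden output matrix H has full column rank, the pseudoinverse solve H⁺T = (HᵀH)⁻¹HᵀT can be carried out with a Cholesky factorization of the symmetric positive-definite Gram matrix HᵀH, avoiding the costlier SVD. A rough sketch (the tiny jitter term is an illustrative safeguard, not from the paper):

```python
import numpy as np

def elm_beta_svd(H, T):
    """Original ELM: output weights via the SVD-based pseudoinverse."""
    return np.linalg.pinv(H) @ T

def elm_beta_cholesky(H, T, eps=1e-10):
    """FRCF-style solve: for full-column-rank H, H^+ T = (H^T H)^{-1} H^T T,
    computed with a Cholesky factorization of the SPD Gram matrix."""
    G = H.T @ H + eps * np.eye(H.shape[1])  # jitter guards against rank loss
    L = np.linalg.cholesky(G)               # G = L L^T
    y = np.linalg.solve(L, H.T @ T)         # forward substitution
    return np.linalg.solve(L.T, y)          # back substitution
```

For an N×L hidden matrix with N ≫ L, forming the Gram matrix and factorizing it costs O(NL² + L³), whereas the SVD of H carries a substantially larger constant factor, which is where the reported speed-up comes from.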


2013 ◽  
Vol 765-767 ◽  
pp. 1854-1857
Author(s):  
Feng Wang ◽  
Jin Lin Ding ◽  
Hong Sun

Neural network generalized inverse (NNGI) can realize synchronous decoupling control of two motors, but traditional neural networks (NNs) have many shortcomings. The regularized extreme learning machine (RELM) has fast learning and good generalization ability, making it an ideal approach for approximating the inverse system, but it is difficult to choose a reasonable number of hidden neurons accurately. An improved incremental RELM (IIRELM) is proposed on the basis of an analysis of the RELM learning algorithm; it can automatically determine the optimal network structure by gradually adding new hidden-layer neurons. A prediction model based on IIRELM is applied to two-motor closed-loop control based on NNGI, realizing decoupling control between velocity and tension. The experimental results prove that the system has excellent performance.
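The growth strategy described (keep adding hidden neurons until the fit is good enough) can be sketched as below; the batch size, tolerance, and ridge term are illustrative, and the real IIRELM presumably updates the solution incrementally rather than re-solving from scratch at each step:

```python
import numpy as np

def incremental_relm(X, T, max_hidden=100, tol=1e-3, lam=1e-6, seed=0):
    """Sketch of incrementally growing a regularized ELM: add hidden neurons
    in small batches until the training RMSE falls below tol."""
    rng = np.random.default_rng(seed)
    W = np.empty((X.shape[1], 0))
    b = np.empty(0)
    while W.shape[1] < max_hidden:
        W = np.hstack([W, rng.standard_normal((X.shape[1], 5))])  # add 5 neurons
        b = np.concatenate([b, rng.standard_normal(5)])
        H = np.tanh(X @ W + b)
        n = H.shape[1]
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ T)
        rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
        if rmse < tol:          # network is large enough -- stop growing
            break
    return W, b, beta
```

The point of the incremental scheme is that the user never fixes the hidden-layer size in advance; the stopping rule determines it from the data.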


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 609 ◽  
Author(s):  
Fan Zhang ◽  
Jiabin Liu ◽  
Bo Wang ◽  
Zhiquan Qi ◽  
Yong Shi

Learning from label proportions (LLP) is a new kind of learning problem that has attracted wide interest in machine learning. Unlike well-known supervised learning, the training data in LLP come in the form of bags, and only the proportion of each class in each bag is available. Many modern applications can be abstracted to this problem, such as modeling voting behaviors and spam filtering. However, time-consuming training is still a challenge for LLP, and it becomes a bottleneck especially when addressing large bags and bag sizes. In this paper, we propose a fast algorithm called multi-class learning from label proportions by extreme learning machine (LLP-ELM), which takes advantage of the extreme learning machine's fast learning speed to solve multi-class LLP. First, we reshape the hidden layer output matrix and the training data target matrix of an extreme learning machine to adapt to the proportion information instead of the real labels. Second, a robust loss function with a regularization term is formulated, and two efficient solutions are provided for different cases. Finally, various experiments demonstrate the significant speed-up of the proposed model, with better accuracy on different datasets compared with several state-of-the-art methods.
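The reshaping step can be sketched as follows: average the hidden-layer outputs within each bag so that one row per bag is regressed onto that bag's class proportions, replacing the per-instance label matrix. This is a rough sketch with a plain ridge loss standing in for the paper's robust loss; names and values are illustrative:

```python
import numpy as np

def llp_elm_fit(X, bags, proportions, n_hidden=30, lam=1e-2, seed=0):
    """LLP-ELM sketch: bag-averaged hidden outputs regressed on proportions."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                                  # instance-level outputs
    Hbag = np.stack([H[idx].mean(axis=0) for idx in bags])  # one row per bag
    # Ridge solve against bag-level class proportions instead of real labels.
    beta = np.linalg.solve(Hbag.T @ Hbag + lam * np.eye(n_hidden),
                           Hbag.T @ proportions)
    return W, b, beta
```

Because the system being solved has one row per bag rather than one per instance, the solve stays small even when bags are large, which matches the speed-up the abstract emphasizes.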


2014 ◽  
Vol 644-650 ◽  
pp. 2407-2410
Author(s):  
Dai Yuan Zhang ◽  
Jia Kai Wang

Training neural networks by spline weight functions (SWF) overcomes many defects of traditional neural networks (such as local minima and slow convergence). The approach has become more important because of its simple topological structure, fast learning speed, and high accuracy. To generalize the SWF algorithm, this paper introduces a kind of rational spline weight function neural network and analyzes the approximation performance of this neural network.

