Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Adnan O. M. Abuassba ◽  
Dezheng Zhang ◽  
Xiong Luo ◽  
Ahmad Shaheryar ◽  
Hazrat Ali

Extreme Learning Machine (ELM) is a fast-learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). It often achieves good generalization performance, but it may overfit the training data when it has more hidden nodes than needed. To improve generalization, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of the training data selected through random resampling, and it is grown using an objective function that increases both the diversity and the accuracy of the final ensemble. Finally, the class label of unseen data is predicted with a majority-vote approach. Splitting the training data into subsets and incorporating heterogeneous ELM classifiers yield higher prediction accuracy, better generalization, and fewer base classifiers than other models (AdaBoost, bagging, dynamic ELM ensemble, data-splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
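
As a rough illustration of this recipe, the sketch below (not the authors' code) trains bootstrap-resampled, randomly chosen ELM configurations and combines them by majority vote; the three hyperparameter settings merely stand in for Regularized-ELM, ELML2, and Kernel-ELM, and integer-coded class labels are assumed.

```python
import numpy as np

class RegularizedELM:
    """Minimal single-hidden-layer ELM with ridge-regularized output weights."""
    def __init__(self, n_hidden=50, C=1.0, rng=None):
        self.n_hidden, self.C = n_hidden, C
        self.rng = rng if rng is not None else np.random.default_rng()

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random feature map

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # beta = (H^T H + I/C)^(-1) H^T T, the regularized least-squares solution
        self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C, H.T @ T)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

def train_aelme(X, y, n_members=15, rng=None):
    """Train a randomly chosen ELM configuration on each randomly resampled subset."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Illustrative stand-ins for the paper's Regularized-ELM / ELML2 / Kernel-ELM:
    variants = [dict(n_hidden=30, C=1.0), dict(n_hidden=60, C=10.0), dict(n_hidden=100, C=0.1)]
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))   # random resampling of training data
        cfg = variants[rng.integers(len(variants))]  # random base-learner choice
        members.append(RegularizedELM(rng=rng, **cfg).fit(X[idx], y[idx]))
    return members

def majority_vote(members, X):
    votes = np.stack([m.predict(X) for m in members])  # assumes integer-coded labels
    return np.array([np.bincount(col).argmax() for col in votes.T])
```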

Author(s):  
Adnan Omer Abuassba ◽  
Dezheng Zhang ◽  
Xiong Luo

Extreme learning machine (ELM) is an effective learning algorithm for the single hidden layer feed-forward neural network (SLFN). It comes in diverse forms, such as kernel ELM and regularized ELM, which differ in their kernels or feature-mapping functions, and it typically learns quickly while achieving good performance. Handling imbalanced data has been a long-standing focus of learning algorithms seeking satisfactory analytical results: an unbalanced class distribution poses serious obstacles to learning tasks in real-world applications, including online visual tracking and image quality assessment. This article addresses the issue through an advanced diverse AdaBoost-based ELM ensemble (AELME) for imbalanced binary and multiclass data classification, aiming to improve classification accuracy on imbalanced data. In the proposed method, the ensemble is developed by splitting the training data into corresponding subsets, and different enhanced ELM algorithms, including regularized ELM and kernel ELM, are used as base learners, so that a strong learner is constructed from a group of relatively weak base learners. AELME is implemented by training a randomly selected ELM classifier on a subset chosen by random re-sampling; the labels of unseen data are then predicted using a weighted-voting approach. AELME is validated through classification on real-world benchmark datasets.
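
A minimal sketch of the boosting side of this idea, assuming nothing beyond the abstract: SAMME-style AdaBoost with ELM base learners, where each round re-samples the training set by the current sample weights and unseen data are labeled by weighted vote. The elm_fit helper and all settings are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, classes, n_hidden=40, C=1.0):
    """Train one ELM base learner; returns a predict(X) closure."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    T = (y[:, None] == classes[None, :]).astype(float)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return lambda Xq: classes[np.argmax(np.tanh(Xq @ W + b) @ beta, axis=1)]

def adaboost_elm(X, y, n_rounds=10):
    classes = np.unique(y)
    n = len(X)
    w = np.full(n, 1.0 / n)                 # sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)    # re-sample a subset by current weights
        h = elm_fit(X[idx], y[idx], classes)
        miss = h(X) != y
        err = w[miss].sum()
        if err == 0 or err >= 1 - 1.0 / len(classes):
            break                           # SAMME stopping guard
        alpha = np.log((1 - err) / err) + np.log(len(classes) - 1)
        w *= np.exp(alpha * miss)           # up-weight misclassified samples
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas, classes

def weighted_vote(learners, alphas, classes, Xq):
    scores = np.zeros((len(Xq), len(classes)))
    for h, a in zip(learners, alphas):
        scores[np.arange(len(Xq)), np.searchsorted(classes, h(Xq))] += a
    return classes[np.argmax(scores, axis=1)]
```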


Author(s):  
JUNHAI ZHAI ◽  
HONGYU XU ◽  
YAN LI

Extreme learning machine (ELM) is an efficient and practical learning algorithm for training single hidden layer feed-forward neural networks (SLFNs). ELM can provide good generalization performance at extremely fast learning speed; however, it suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, this paper proposes an approach that fuses extreme learning machines (F-ELM) with the fuzzy integral. The proposed algorithm consists of three stages. First, the bootstrap technique is employed to generate several subsets of the original dataset. Second, probabilistic SLFNs are trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with the fuzzy integral. The experimental results show that the proposed approach can alleviate these problems to some extent and can increase prediction accuracy.
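
The abstract does not spell out which fuzzy integral is used; the sketch below assumes the common Sugeno integral with a lambda-fuzzy measure, with each classifier's density (for instance, its validation accuracy) supplied by the user. It fuses per-class supports from the bootstrap-trained models.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_measure(g):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure parameter."""
    f = lambda lam: np.prod(1 + lam * g) - (1 + lam)
    if np.isclose(g.sum(), 1.0):
        return 0.0                          # measure is already additive
    if g.sum() > 1:
        return brentq(f, -1 + 1e-9, -1e-9)  # root lies in (-1, 0)
    return brentq(f, 1e-9, 1e7)             # root lies in (0, inf)

def sugeno_fuse(supports, g):
    """Sugeno fuzzy integral of one class's supports w.r.t. the lambda-measure."""
    lam = lambda_measure(g)
    order = np.argsort(supports)[::-1]      # sort classifier supports descending
    G, best = None, 0.0
    for i in order:
        # G(A_k) = g_i + G(A_{k-1}) + lam * g_i * G(A_{k-1})
        G = g[i] if G is None else g[i] + G + lam * g[i] * G
        best = max(best, min(supports[i], G))
    return best

def fuse_predict(P, g):
    """P: (n_classifiers, n_classes) supports for one sample; g: densities."""
    return int(np.argmax([sugeno_fuse(P[:, c], g) for c in range(P.shape[1])]))

# Example: three bootstrap-trained models, two classes, densities from validation accuracy.
P = np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]])
g = np.array([0.5, 0.4, 0.3])
print(fuse_predict(P, g))
```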


Author(s):  
Qingsong Xu

Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks. In theory, this algorithm is able to provide good generalization capability at extremely fast learning speed. Comparative studies on benchmark function approximation problems revealed that ELM can learn thousands of times faster than conventional neural networks (NNs) and can produce good generalization performance in most cases. Unfortunately, research on damage localization using ELM is limited in the literature. In this chapter, ELM is extended to the domain of damage localization of plate structures. Its effectiveness in comparison with typical neural networks such as the back-propagation neural network (BPNN) and the least squares support vector machine (LSSVM) is illustrated through experimental studies. Comparative investigations in terms of learning time and localization accuracy are carried out in detail. It is shown that ELM paves a new way in the domain of plate structural health monitoring. Both advantages and disadvantages of using ELM are discussed.
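
For readers unfamiliar with why ELM trains so quickly, here is a minimal sketch of standard ELM training (the chapter's exact network settings are not given in the abstract): the hidden layer stays random and only the output weights are solved in closed form, for example to regress damage coordinates.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, rng=None):
    """Standard ELM: random hidden parameters, output weights by pseudo-inverse.
    No iterative tuning of hidden weights, which is where the speed comes from."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # one pass through random features
    beta = np.linalg.pinv(H) @ T                  # closed-form least-squares solution
    return lambda Xq: np.tanh(Xq @ W + b) @ beta  # e.g. regress damage coordinates
```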


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Hai-Gang Zhang ◽  
Sen Zhang ◽  
Yi-Xin Yin

It is well known that feedforward neural networks face many difficulties in applications because of their slow learning speed. The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network method aimed at improving training speed. The ELM algorithm is now widely applied thanks to its good generalization performance at fast learning speed. However, several problems in ELM still need to be solved. In this paper, a new improved ELM algorithm named R-ELM is proposed to handle the multicollinearity problem that appears in the computation of the ELM algorithm. The proposed algorithm is applied to bearing fault detection using stator current monitoring. Simulation results show that the R-ELM algorithm has better stability and generalization performance than the original ELM and other neural network methods.
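
The abstract does not detail R-ELM's exact formulation, but the standard remedy for multicollinearity in the ELM output-weight solve is a ridge term, as sketched below; the toy example shows how nearly identical hidden-layer columns blow up the condition number of the plain normal equations while the regularized solve stays stable.

```python
import numpy as np

def elm_output_weights(H, T, ridge=1e-2):
    """Ridge term keeps H^T H well conditioned when hidden outputs are collinear."""
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ T)

rng = np.random.default_rng(0)
h = rng.normal(size=(200, 1))
H = np.hstack([h, h + 1e-8 * rng.normal(size=(200, 1))])  # two nearly identical columns
T = rng.normal(size=(200, 1))
print(np.linalg.cond(H.T @ H))      # huge: the plain normal equations are fragile
beta = elm_output_weights(H, T)     # the regularized solve stays stable
```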


2018 ◽  
Vol 246 ◽  
pp. 03018
Author(s):  
Zuozhi Liu ◽  
JinJian Wu ◽  
Jianpeng Wang

Extreme learning machine (ELM) is a novel learning algorithm for generalized single-hidden layer feedforward networks (SLFNs). Although it shows fast learning speed in many areas, there is still room for improvement in computational cost. To address this issue, this paper proposes an improved ELM (FRCF-ELM) which employs the full-rank Cholesky factorization to compute output weights instead of the traditional SVD. In addition, the paper proves in theory that the proposed FRCF-ELM has lower computational complexity. Experimental results on several benchmark applications indicate that the proposed FRCF-ELM learns faster than the original ELM algorithm while preserving good generalization performance.
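
The contrast between the two routes can be sketched in a few lines (an illustration of the general idea, not the paper's FRCF-ELM implementation): the SVD-based pseudo-inverse decomposes the tall matrix H, whereas the Cholesky route factors the much smaller Gram matrix H^T H once it is full rank and then back-substitutes.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def elm_beta_svd(H, T):
    """Baseline: Moore-Penrose pseudo-inverse via SVD, as in standard ELM."""
    return np.linalg.pinv(H) @ T

def elm_beta_cholesky(H, T, ridge=1e-8):
    """Factor the small Gram matrix H^T H (full rank thanks to the tiny ridge)
    and back-substitute; cheaper than an SVD of the tall matrix H."""
    A = H.T @ H + ridge * np.eye(H.shape[1])
    return cho_solve(cho_factor(A), H.T @ T)

rng = np.random.default_rng(0)
H = rng.normal(size=(5000, 200))
T = rng.normal(size=(5000, 3))
print(np.max(np.abs(elm_beta_svd(H, T) - elm_beta_cholesky(H, T))))  # near-identical
```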


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Imen Jammoussi ◽  
Mounir Ben Nasr

Extreme learning machine is a fast learning algorithm for single hidden layer feedforward neural networks. However, an improper number of hidden neurons and random parameters greatly affect the performance of the extreme learning machine. To select a suitable number of hidden neurons, this paper proposes a novel hybrid learning method based on a two-step process. First, the parameters of the hidden layer are adjusted by a self-organized learning algorithm. Next, the weight matrix of the output layer is determined using the Moore–Penrose inverse method. Nine classification datasets are considered to demonstrate the efficiency of the proposed approach compared with the original extreme learning machine, Tikhonov-regularization optimally pruned extreme learning machine, and backpropagation algorithms. The results show that the proposed method is fast and produces better accuracy and generalization performance.
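
A sketch of such a two-step hybrid, under assumptions the abstract leaves open (RBF hidden units, a winner-take-all competitive rule as the self-organized step, and at least n_hidden training samples): centers are placed by competitive learning, then the output weights come from the Moore–Penrose pseudo-inverse.

```python
import numpy as np

def som_like_centers(X, n_hidden=20, epochs=5, lr=0.1, rng=None):
    """Competitive (self-organized) placement of hidden RBF centers: each sample
    pulls its nearest center toward itself."""
    rng = rng if rng is not None else np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_hidden, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            k = np.argmin(((centers - x) ** 2).sum(axis=1))  # winner neuron
            centers[k] += lr * (x - centers[k])              # move winner toward x
    return centers

def fit_hybrid_elm(X, y, n_hidden=20, gamma=1.0):
    classes = np.unique(y)
    centers = som_like_centers(X, n_hidden)                  # step 1: self-organized
    H = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    T = (y[:, None] == classes[None, :]).astype(float)
    beta = np.linalg.pinv(H) @ T                             # step 2: Moore-Penrose
    def predict(Xq):
        Hq = np.exp(-gamma * ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
        return classes[np.argmax(Hq @ beta, axis=1)]
    return predict
```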


2013 ◽  
Vol 765-767 ◽  
pp. 1854-1857
Author(s):  
Feng Wang ◽  
Jin Lin Ding ◽  
Hong Sun

Neural network generalized inverse (NNGI) can realize synchronous decoupling control of two motors, but traditional neural networks (NNs) have many shortcomings. The regularized extreme learning machine (RELM) offers fast learning and good generalization ability, making it an ideal approach for approximating the inverse system; however, it is difficult to specify a reasonable number of hidden neurons in advance. An improved incremental RELM (IIRELM) is developed from an analysis of the RELM learning algorithm: it determines the optimal network structure automatically by gradually adding new hidden-layer neurons. A prediction model based on IIRELM is applied in two-motor closed-loop control based on NNGI, realizing decoupling control between velocity and tension. The experimental results prove that the system has excellent performance.
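
The following sketch mimics IIRELM's automatic sizing in its simplest form; it is schematic, not the paper's recursive update, since it re-solves the regularized least-squares problem from scratch after each added neuron and stops once a held-out validation error stalls.

```python
import numpy as np

def iirelm_fit(X, y, X_val, y_val, max_hidden=200, C=1.0, patience=10, rng=None):
    """Grow the hidden layer one neuron at a time, keeping the size with the best
    validation RMSE."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = X.shape[1]
    W, b = np.empty((d, 0)), np.empty(0)
    best_err, best_model, stall = np.inf, None, 0
    for m in range(1, max_hidden + 1):
        W = np.hstack([W, rng.normal(size=(d, 1))])  # add one hidden neuron
        b = np.append(b, rng.normal())
        H = np.tanh(X @ W + b)
        beta = np.linalg.solve(H.T @ H + np.eye(m) / C, H.T @ y[:, None])
        pred = np.tanh(X_val @ W + b) @ beta
        err = np.sqrt(np.mean((pred - y_val[:, None]) ** 2))
        if err < best_err:
            best_err, best_model, stall = err, (W.copy(), b.copy(), beta), 0
        else:
            stall += 1
            if stall >= patience:                    # validation error has stalled
                break
    W, b, beta = best_model
    return lambda Xq: (np.tanh(Xq @ W + b) @ beta).ravel()
```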


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 609 ◽  
Author(s):  
Fan Zhang ◽  
Jiabin Liu ◽  
Bo Wang ◽  
Zhiquan Qi ◽  
Yong Shi

Learning from label proportions (LLP) is a new kind of learning problem that has attracted wide interest in machine learning. Unlike well-known supervised learning, the training data in LLP come in the form of bags, and only the proportion of each class in each bag is available. Many modern applications, such as modeling voting behaviors and spam filtering, can be successfully abstracted to this problem. However, time-consuming training remains a challenge for LLP, becoming a bottleneck especially with large bags and bag sizes. In this paper, we propose a fast algorithm called multi-class learning from label proportions by extreme learning machine (LLP-ELM), which takes advantage of the fast learning speed of the extreme learning machine to solve multi-class LLP. First, we reshape the hidden layer output matrix and the training-data target matrix of an extreme learning machine to work with the proportion information instead of the real labels. Second, a robust loss function with a regularization term is formulated, and two efficient solutions are provided for different cases. Finally, various experiments demonstrate a significant speed-up of the proposed model with better accuracy on different datasets compared with several state-of-the-art methods.
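
A schematic reading of the reshaping step, with every detail beyond the abstract assumed: each bag contributes one row, the mean hidden-layer response of its instances, and the target matrix holds the bag's class proportions; a plain regularized solve then stands in for the paper's robust loss.

```python
import numpy as np

def llp_elm(bags, proportions, n_hidden=100, C=1.0, rng=None):
    """bags: list of (n_j, d) instance arrays; proportions: (n_bags, n_classes)
    class proportions per bag. One row per bag: the mean hidden-layer response
    of its instances, regressed onto the bag's label proportions."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = bags[0].shape[1]
    W = rng.normal(size=(d, n_hidden))
    b = rng.normal(size=n_hidden)
    H_bar = np.stack([np.tanh(B @ W + b).mean(axis=0) for B in bags])
    P = np.asarray(proportions, dtype=float)
    beta = np.linalg.solve(H_bar.T @ H_bar + np.eye(n_hidden) / C, H_bar.T @ P)
    # Instance-level prediction despite only bag-level supervision:
    return lambda Xq: np.argmax(np.tanh(Xq @ W + b) @ beta, axis=1)
```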


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 830
Author(s):  
Seokho Kang

k-nearest neighbor (kNN) is a widely used learning algorithm for supervised learning tasks. In practice, the main challenge in using kNN is its high sensitivity to hyperparameter settings, including the number of nearest neighbors k, the distance function, and the weighting function. To improve robustness to hyperparameters, this study presents a novel kNN learning method based on a graph neural network, named kNNGNN. Given training data, the method learns a task-specific kNN rule in an end-to-end fashion by means of a graph neural network that takes the kNN graph of an instance as input to predict the label of that instance. The distance and weighting functions are implicitly embedded within the graph neural network. For a query instance, the prediction is obtained by performing a kNN search over the training data to create a kNN graph and passing it through the graph neural network. The effectiveness of the proposed method is demonstrated using various benchmark datasets for classification and regression tasks.
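
To make the data flow concrete, here is a deliberately simplified stand-in for the pipeline (not the paper's model): the learned GNN is replaced by a fixed softmax weighting over the query's kNN graph, so only the graph construction and the aggregation step are illustrated.

```python
import numpy as np

def knn_graph_predict(X_train, y_train, x_query, k=5, temperature=1.0):
    """Build the query's kNN graph from the training data, then aggregate the
    neighbors' labels. The fixed softmax over negative distances stands in for
    the distance and weighting functions the paper's GNN learns end-to-end."""
    dist = np.linalg.norm(X_train - x_query, axis=1)
    nbrs = np.argsort(dist)[:k]             # k nearest neighbors = graph nodes
    w = np.exp(-dist[nbrs] / temperature)
    w /= w.sum()                            # softmax-style edge weights
    classes = np.unique(y_train)
    onehot = (y_train[nbrs][:, None] == classes[None, :]).astype(float)
    return classes[np.argmax(w @ onehot)]   # weighted aggregation -> label
```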


2014 ◽  
Vol 989-994 ◽  
pp. 3679-3682 ◽  
Author(s):  
Meng Meng Ma ◽  
Bo He

Extreme learning machine (ELM), a relatively novel machine learning algorithm for single hidden layer feed-forward neural networks (SLFNs), has shown competitive performance thanks to its simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep-structure ELM (DS-ELM) is proposed in this paper. DS-ELM contains three network levels: the first level is an auto-associative neural network (AANN) that aims to filter out noise and reduce dimension when necessary; the second level is another AANN that aims to fix the input weights and biases of the ELM; and the last level is the ELM itself. Experiments on four noisy datasets are carried out to examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
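
A sketch of the three-level stack, under the assumption (not stated in the abstract) that each AANN is trained ELM-AE-style, with a random hidden layer and output weights solved to reconstruct the input, and that each level feeds its learned representation to the next; the sizes and activations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def aann_fit(X, n_hidden):
    """ELM-AE-style auto-associative step: random hidden layer, output weights
    solved so the network reconstructs its own input; beta^T then maps inputs
    to a denoised, reduced representation."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ X            # output weights reconstruct the input
    return lambda Z: Z @ beta.T             # learned representation

def ds_elm_fit(X, y, n1=40, n2=30, n3=60, C=1.0):
    """Three levels in the spirit of DS-ELM: AANN -> AANN -> ELM classifier."""
    enc1 = aann_fit(X, n1)                  # level 1: noise filtering / reduction
    Z1 = enc1(X)
    enc2 = aann_fit(Z1, n2)                 # level 2: fixes the ELM's input mapping
    Z2 = enc2(Z1)
    classes = np.unique(y)
    W = rng.normal(size=(Z2.shape[1], n3))
    b = rng.normal(size=n3)
    H = np.tanh(Z2 @ W + b)
    T = (y[:, None] == classes[None, :]).astype(float)
    beta = np.linalg.solve(H.T @ H + np.eye(n3) / C, H.T @ T)  # level 3: ELM
    return lambda Xq: classes[np.argmax(np.tanh(enc2(enc1(Xq)) @ W + b) @ beta, axis=1)]
```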

