EVOLVING EXTREME LEARNING MACHINE PARADIGM WITH ADAPTIVE OPERATOR SELECTION AND PARAMETER CONTROL

Author(s):  
KE LI ◽  
RAN WANG ◽  
SAM KWONG ◽  
JINGJING CAO

Extreme Learning Machine (ELM) is an emergent technique for training Single-hidden Layer Feedforward Networks (SLFNs). It has attracted significant interest in recent years, but its randomly assigned network parameters can incur high learning risks. This motivates us to propose an evolving ELM paradigm for classification problems. In this paradigm, a Differential Evolution (DE) variant, which selects the appropriate offspring-generation operator online and adaptively adjusts the corresponding control parameters, is proposed for optimizing the network. In addition, 5-fold cross validation is adopted in the fitness assignment procedure to improve the generalization capability. Empirical studies on several real-world classification data sets demonstrate that the evolving ELM paradigm generally outperforms the original ELM as well as several recent classification algorithms.
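The standard ELM training scheme that this evolving paradigm builds on can be sketched as follows (a minimal sketch with hypothetical names, not the paper's implementation; the paper's DE variant would replace the random draw of `W` and `b` with evolved candidates scored by cross-validated fitness):

```python
import numpy as np

def elm_train(X, T, n_hidden=20, seed=0):
    """Standard ELM: draw hidden-layer parameters at random, then
    solve the output weights analytically via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only the output weights are solved for, each candidate network is cheap to evaluate, which is what makes wrapping an evolutionary search around the hidden-layer parameters tractable.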

Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1284
Author(s):  
Licheng Cui ◽  
Huawei Zhai ◽  
Hongfei Lin

An extreme learning machine (ELM) is an innovative algorithm for single hidden layer feed-forward neural networks that essentially reduces to finding the optimal output weight, so as to minimize the output error, through least squares regression from the hidden layer to the output layer. Focusing on the output weight, we introduce an orthogonal constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on column-by-column optimization, whose main characteristic is that the optimization of the full output weight matrix is decomposed into optimizing the individual column vectors of the matrix. The complex orthogonal Procrustes problem is thereby transformed into simple least squares regression with an orthogonal constraint, which preserves more information from the ELM feature space in the output subspace and gives NOELM stronger regression and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time and accuracy.
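The orthogonal Procrustes problem mentioned here has a classical closed-form solution via the SVD, which can be sketched directly (a generic sketch for the square orthogonal case; NOELM's column-by-column decomposition itself is more involved):

```python
import numpy as np

def procrustes(A, B):
    """Closed-form solution of the orthogonal Procrustes problem
    min ||A @ Q - B||_F over square orthogonal Q: with the SVD
    U @ diag(S) @ Vt = A.T @ B, the minimizer is Q = U @ Vt."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```

When `B` actually lies in the orthogonal image of `A` (i.e. `B = A @ Q0` for some orthogonal `Q0`), this recovers `Q0` exactly.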


2017 ◽  
Vol 26 (1) ◽  
pp. 185-195 ◽  
Author(s):  
Jie Wang ◽  
Liangjian Cai ◽  
Xin Zhao

Abstract: As we are usually confronted with a large instance space in real-world data sets, it is important to develop a useful and efficient multiple-instance learning (MIL) algorithm. MIL, where training data are prepared in the form of labeled bags rather than labeled instances, is a variant of supervised learning. This paper presents a novel MIL algorithm for an extreme learning machine called MI-ELM. A radial basis kernel extreme learning machine is adapted to the MIL problem, using the Hausdorff distance to measure the distance between bags. The clusters in the hidden layer are composed of randomly generated bags. Because we do not need to tune the parameters of the hidden layer, MI-ELM can learn very fast. Experimental results on classification and multiple-instance regression data sets demonstrate that MI-ELM is useful and efficient compared to state-of-the-art algorithms.
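The bag-to-bag distance used here can be sketched directly (the standard symmetric Hausdorff distance between two sets of instances; the radial basis kernel built on top of it is not shown):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two bags of instances,
    where each row of A and B is one instance (feature vector)."""
    # Pairwise Euclidean distances between every instance of A and B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # Directed distances: worst-case nearest-neighbour gap each way.
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Because the distance is defined on whole bags, it lets a kernel machine operate on labeled bags without ever labeling individual instances.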


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Yang Liu ◽  
Bo He ◽  
Diya Dong ◽  
Yue Shen ◽  
Tianhong Yan ◽  
...  

A novel particle swarm optimization based selective ensemble (PSOSEN) of online sequential extreme learning machines (OS-ELM) is proposed. It builds on the original OS-ELM with an adaptive selective ensemble framework. This paper offers two novel insights. First, a novel selective ensemble algorithm, particle swarm optimization selective ensemble, is proposed; notably, PSOSEN is a general selective ensemble method applicable to any learning algorithm, including both batch and online learning. Second, an adaptive selective ensemble framework for online learning is designed to balance the accuracy and speed of the algorithm. Experiments on both regression and classification problems with UCI data sets are carried out. Comparisons among OS-ELM, simple ensemble OS-ELM (EOS-ELM), genetic algorithm based selective ensemble (GASEN) of OS-ELM, and the proposed particle swarm optimization based selective ensemble of OS-ELM empirically show that the proposed algorithm achieves good generalization performance and fast learning speed.
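The core of any selective ensemble is the fitness a candidate selection mask receives; the sketch below shows that piece with hypothetical names (PSO would evolve the binary `mask`, which is not shown):

```python
import numpy as np

def selective_ensemble_predict(preds, mask):
    """Combine only the selected members' predictions.
    preds: (n_models, n_samples) member outputs;
    mask: binary vector chosen by the optimizer (PSO in PSOSEN)."""
    sel = np.flatnonzero(mask)
    return preds[sel].mean(axis=0)

def ensemble_fitness(preds, mask, targets):
    """Validation MSE of the selected sub-ensemble, used as fitness."""
    return np.mean((selective_ensemble_predict(preds, mask) - targets) ** 2)
```

Dropping a poorly trained member (mask entry 0) can lower the ensemble error below that of averaging everything, which is the motivation for selection over simple ensembling.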


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Qinwei Fan ◽  
Tongke Fan

Extreme learning machine (ELM), as a new simple feedforward neural network learning algorithm, has been extensively used in practical applications because of its good generalization performance and fast learning speed. However, the standard ELM requires more hidden nodes in applications due to the random assignment of hidden layer parameters, which leads to disadvantages such as poor hidden-layer sparsity, low adjustment ability, and a complex network structure. In this paper, we propose a hybrid ELM algorithm based on the bat and cuckoo search algorithms to optimize the input weights and thresholds of the ELM. We evaluate the numerical performance on function approximation and classification problems over several benchmark datasets; simulation results show that the proposed algorithm obtains significantly better prediction accuracy than similar algorithms.
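The role of the bat and cuckoo search steps can be seen from the fitness evaluation each candidate undergoes (a hedged sketch with hypothetical names: only the input weights and thresholds are encoded in the search vector `theta`, while the output weights remain analytic, which is the usual pattern for metaheuristic-tuned ELMs):

```python
import numpy as np

def candidate_fitness(theta, X_tr, T_tr, X_val, T_val, n_hidden):
    """Fitness of one candidate encoding the ELM input weights and
    thresholds; lower validation MSE means a fitter candidate."""
    d = X_tr.shape[1]
    W = theta[: d * n_hidden].reshape(d, n_hidden)  # decoded input weights
    b = theta[d * n_hidden :]                       # decoded thresholds
    H = np.tanh(X_tr @ W + b)
    beta = np.linalg.pinv(H) @ T_tr                 # analytic output weights
    P = np.tanh(X_val @ W + b) @ beta
    return np.mean((P - T_val) ** 2)
```

The search algorithm then only has to move candidates `theta` through this landscape; no gradient through the network is needed.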


2020 ◽  
Vol 36 (1) ◽  
pp. 35-44 ◽  
Author(s):  
LIMPAPAT BUSSABAN ◽  
ATTAPOL KAEWKHAO ◽  
SUTHEP SUANTAI

In this paper, a novel algorithm, called the parallel inertial S-iteration forward-backward algorithm (PISFBA), is proposed for finding a common fixed point of a countable family of nonexpansive mappings, and the convergence behavior of PISFBA is analyzed and discussed. As applications, we apply PISFBA to estimate the weights connecting the hidden layer and output layer in a regularized extreme learning machine. Finally, the proposed learning algorithm is applied to solve regression and data classification problems.
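A plain forward-backward iteration on a regularized least-squares objective illustrates the template that PISFBA accelerates (a sketch under the assumption of an l1 regularizer with soft-thresholding as the backward/proximal step; the paper's inertial and parallel terms are omitted):

```python
import numpy as np

def forward_backward(H, T, lam=0.1, step=None, n_iter=200):
    """Forward-backward iteration for 0.5*||H @ b - T||^2 + lam*||b||_1:
    a gradient (forward) step followed by the proximal (backward) step
    of the l1 term, which is elementwise soft-thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2  # 1/L, L = Lipschitz constant
    b = np.zeros((H.shape[1], T.shape[1]))
    for _ in range(n_iter):
        g = H.T @ (H @ b - T)                   # forward (gradient) step
        z = b - step * g
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox
    return b
```

Inertial variants add a momentum term built from the previous two iterates before the forward step, which is where the convergence-rate gains of methods like PISFBA come from.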


2021 ◽  
Vol 2107 (1) ◽  
pp. 012013
Author(s):  
Boon Pin Ooi ◽  
Norasmadi Abdul Rahim ◽  
Maz Jamilah Masnan ◽  
Ammar Zakaria

Abstract: Extreme learning machine (ELM) is a special type of single hidden layer feedforward neural network that emphasizes training speed and optimal generalization. The ELM model proposes that the weights of hidden neurons need not be tuned, and the weights of output neurons can be calculated via the Moore-Penrose generalized inverse. Thus, the ELM classifier is suitable for use in a homogeneous ensemble model, because the untuned random hidden weights promote diversity even with the same training data. This paper studies the effectiveness of ELM ensemble models in solving small-sample-size classification problems. The research involves two variants of the ensemble model: the normal ELM ensemble with majority voting (ELE), and the random subspace method (RS-ELM). To simulate the small-sample cases, only 30% of the total data is used as training data. Experimental results show that the RS-ELM model can outperform a multi-layer perceptron (MLP) model under the assumptions of a Friedman test. Furthermore, the ELE model has similar performance to an MLP model under the same assumptions.
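The voting step of the ELE variant can be sketched with a hypothetical helper (each ensemble member is an independently initialized ELM; RS-ELM would additionally train each member on a random feature subset before voting):

```python
import numpy as np

def majority_vote(votes):
    """votes: (n_members, n_samples) array of integer class labels;
    returns the per-sample majority class (ties go to the lower label)."""
    n_classes = int(votes.max()) + 1
    return np.array([np.bincount(col, minlength=n_classes).argmax()
                     for col in votes.T])
```

Because each ELM member draws different random hidden weights, the members disagree on borderline samples even with identical training data, and the vote averages those disagreements out.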


2008 ◽  
Vol 18 (05) ◽  
pp. 433-441 ◽  
Author(s):  
HIEU TRUNG HUYNH ◽  
YONGGWAN WON ◽  
JUNG-JA KIM

Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms due to the analytical determination of output weights with a random choice of input weights and hidden layer biases. However, this algorithm often requires a large number of hidden units and thus responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden layer biases. However, E-ELM requires much time to search for optimal parameters through iterative processes and is not suitable for data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of hidden units are determined by a fast regularized least-squares scheme. Experimental results for many real applications with both small and large numbers of input features show that our proposed approach achieves good generalization performance with much more compact networks and extremely high speed for both learning and testing.
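The regularized least-squares determination of output weights has a direct closed form (a generic sketch of the ridge solution; the paper's fast scheme additionally fixes how the hidden-layer parameters themselves are chosen, which is not shown):

```python
import numpy as np

def regularized_output_weights(H, T, lam=1e-3):
    """Ridge-regularized least-squares output weights:
    beta = (H^T H + lam * I)^{-1} H^T T, which stays well
    conditioned even when H^T H is near-singular."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ T)
```

As `lam` grows, the weights shrink toward zero, trading training fit for stability; `lam -> 0` recovers the plain pseudoinverse solution when `H` has full column rank.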


2015 ◽  
Vol 2015 ◽  
pp. 1-6 ◽  
Author(s):  
Jie Wang ◽  
Liangjian Cai ◽  
Jinzhu Peng ◽  
Yuheng Jia

Since real-world data sets usually contain a large number of instances, it is meaningful to develop an efficient and effective multiple instance learning (MIL) algorithm. As a learning paradigm, MIL differs from traditional supervised learning in that it handles the classification of bags comprising unlabeled instances. In this paper, a novel efficient method based on the extreme learning machine (ELM) is proposed to address the MIL problem. First, the most qualified instance in each bag is selected through a single hidden layer feedforward network (SLFN) whose input and output weights are both initialized randomly, and that single selected instance is used to represent its bag. Second, the modified ELM model is trained on the selected instances to update the output weights. Experiments on several benchmark data sets and multiple instance regression data sets show that ELM-MIL achieves good performance; moreover, it runs several times, or even hundreds of times, faster than other similar MIL algorithms.
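The per-bag instance selection step can be sketched as follows (a hedged sketch with a hypothetical scoring rule, the largest summed hidden activation; the paper's criterion via the random SLFN's output may differ in detail):

```python
import numpy as np

def select_instances(bags, W, b):
    """Represent each bag by one instance: score every instance with a
    randomly initialized hidden layer (W, b) and keep the argmax."""
    reps = []
    for bag in bags:                       # bag: (n_instances, n_features)
        scores = np.tanh(bag @ W + b).sum(axis=1)
        reps.append(bag[scores.argmax()])
    return np.stack(reps)
```

After this step the MIL problem collapses into an ordinary single-instance problem, which is why the subsequent ELM training is so fast.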


Author(s):  
Yuancheng Li ◽  
Yaqi Cui ◽  
Xiaolong Zhang

Background: Advanced Metering Infrastructure (AMI) for the smart grid is growing rapidly, resulting in exponential growth of the data collected and transmitted by its devices. Clustering these data can give the electricity company a better understanding of the personalized and differentiated needs of its users.
Objective: Existing clustering algorithms for processing such data generally suffer from problems such as insufficient data utilization, high computational complexity and low accuracy of behavior recognition.
Methods: To improve clustering accuracy, this paper proposes a new clustering method based on users' electrical behavior. Starting from an analysis of user load characteristics, samples of user electricity data were constructed. The daily load characteristic curve was extracted with an improved extreme learning machine clustering algorithm and effective index criteria. Moreover, clustering analysis was carried out for different users from industrial, commercial and residential areas. The improved algorithm, the Unsupervised Extreme Learning Machine (US-ELM), is an extension of the original Extreme Learning Machine (ELM) that performs the unsupervised clustering task on the basis of the original ELM.
Results: Four different data sets were experimented on in MATLAB and compared with other commonly used clustering algorithms. The experimental results show that the US-ELM algorithm achieves higher accuracy in processing power data.
Conclusion: The unsupervised ELM algorithm can greatly reduce time consumption and improve the effectiveness of clustering.
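The ELM feature mapping that such unsupervised variants cluster in can be sketched as follows (a minimal sketch: the full US-ELM additionally learns an embedding with a graph-Laplacian regularizer before clustering, which is not shown here):

```python
import numpy as np

def elm_embed(X, n_hidden=32, seed=0):
    """Map raw samples (e.g. daily load curves) into a random ELM
    feature space; an ordinary clustering algorithm such as k-means
    can then be run on the mapped data."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)
```

Because the mapping needs no training, the embedding cost is a single matrix product, which is consistent with the time savings the abstract reports.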

