Study on Deep Structure of Extreme Learning Machine (DS-ELM) for Datasets with Noise

2014 ◽  
Vol 989-994 ◽  
pp. 3679-3682 ◽  
Author(s):  
Meng Meng Ma ◽  
Bo He

Extreme learning machine (ELM), a relatively novel machine learning algorithm for single hidden layer feed-forward neural networks (SLFNs), has shown competitive performance thanks to its simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep structure of ELM, DS-ELM for short, is proposed in this paper. DS-ELM consists of three levels of networks: the first level is an auto-associative neural network (AANN) that filters out noise and reduces dimensionality when necessary; the second level is another AANN that fixes the input weights and biases of the ELM; and the last level is the ELM itself. Experiments on four noisy datasets are carried out to examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
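
Since the last level of DS-ELM is a plain ELM, a minimal numpy sketch of that core step may help fix ideas. It assumes a sigmoid activation and solves the output weights with the Moore-Penrose pseudoinverse; the two AANN levels are not shown.

```python
import numpy as np

class ELM:
    """Minimal sketch of the standard ELM step used as DS-ELM's final level."""

    def __init__(self, n_hidden, seed=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        n_features = X.shape[1]
        # Random input weights and biases, fixed after initialization.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Output weights solved analytically with the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activation
```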

Author(s):  
JUNHAI ZHAI ◽  
HONGYU XU ◽  
YAN LI

Extreme learning machine (ELM) is an efficient and practical learning algorithm for training single hidden layer feed-forward neural networks (SLFNs). ELM can provide good generalization performance at extremely fast learning speed. However, ELM suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, an approach that fuses extreme learning machines with the fuzzy integral (F-ELM) is proposed in this paper. The proposed algorithm consists of three stages. Firstly, the bootstrap technique is employed to generate several subsets of the original dataset. Secondly, probabilistic SLFNs are trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with the fuzzy integral. The experimental results show that the proposed approach can alleviate to some extent the problems mentioned above and can increase the prediction accuracy.
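
A minimal sketch of the three stages is given below; plain averaging stands in for the fuzzy-integral fusion, which relies on a fuzzy measure described in the paper but omitted here.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng):
    """Train one ELM member on (X, T); returns (W, b, beta)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return W, b, np.linalg.pinv(H) @ T

def f_elm(X, T, X_test, n_models=10, n_hidden=50, seed=0):
    """Stages of F-ELM: bootstrap resampling, per-subset ELM training, and a
    placeholder fusion (simple averaging stands in for the fuzzy integral)."""
    rng = np.random.default_rng(seed)
    outputs = []
    n = X.shape[0]
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)                 # bootstrap subset
        W, b, beta = train_elm(X[idx], T[idx], n_hidden, rng)
        H_test = 1.0 / (1.0 + np.exp(-(X_test @ W + b)))
        outputs.append(H_test @ beta)
    return np.mean(outputs, axis=0)                      # placeholder fusion
```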


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Qinwei Fan ◽  
Tongke Fan

Extreme learning machine (ELM), as a simple new feedforward neural network learning algorithm, has been used extensively in practical applications because of its good generalization performance and fast learning speed. However, the standard ELM requires more hidden nodes in practice due to the random assignment of hidden-layer parameters, which in turn brings disadvantages such as poor hidden-layer sparsity, low adjustment ability, and a complex network structure. In this paper, we propose a hybrid ELM algorithm based on the bat and cuckoo search algorithms to optimize the input weights and thresholds of the ELM. We test the numerical performance on function approximation and classification problems over several benchmark datasets; simulation results show that the proposed algorithm obtains significantly better prediction accuracy than similar algorithms.
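
The general shape of the search can be sketched as follows; for brevity, plain random search over candidate input weights replaces the bat and cuckoo update rules, with each candidate scored by validation error after the output weights are solved analytically.

```python
import numpy as np

def optimize_input_weights(X_tr, T_tr, X_val, T_val, n_hidden=30,
                           n_candidates=50, seed=0):
    """Hedged stand-in for the bat/cuckoo search: random search over input
    weights and biases, scored by validation error of the resulting ELM."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_candidates):
        W = rng.uniform(-1, 1, (X_tr.shape[1], n_hidden))
        b = rng.uniform(-1, 1, n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X_tr @ W + b)))
        beta = np.linalg.pinv(H) @ T_tr                   # analytic output weights
        H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
        err = np.mean((H_val @ beta - T_val) ** 2)
        if err < best_err:
            best, best_err = (W, b, beta), err
    return best
```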


Filomat ◽  
2020 ◽  
Vol 34 (15) ◽  
pp. 4985-4996
Author(s):  
Bolin Liao ◽  
Chuan Ma ◽  
Meiling Liao ◽  
Shuai Li ◽  
Zhiguan Huang

In this paper, a novel type of feed-forward neural network with a simple structure is proposed and investigated for pattern classification. Because this network's parameter setting mirrors that of the Extreme Learning Machine (ELM), it is termed the mirror extreme learning machine (MELM). For the MELM, the input weights are determined analytically by the pseudoinverse method, while the output weights are generated randomly, which is the opposite of the conventional ELM. Besides, a growing method is adopted to obtain the optimal hidden-layer structure. Finally, to evaluate the performance of the proposed MELM, extensive comparative experiments on different real-world classification datasets are performed. Experimental results validate the high classification accuracy and good generalization performance of the proposed neural network with its simple structure in pattern classification.
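
One plausible reading of the mirrored setting (not necessarily the authors' exact procedure) is sketched below, assuming a sigmoid activation: the output weights are drawn at random, a hidden-layer target is derived from the labels, and the input weights are then solved analytically with the pseudoinverse.

```python
import numpy as np

def mirror_elm_fit(X, T, n_hidden=40, seed=0, eps=1e-6):
    """Hedged sketch of the mirrored parameter setting: random output weights,
    analytically solved input weights. T has shape (n_samples, n_outputs)."""
    rng = np.random.default_rng(seed)
    beta = rng.standard_normal((n_hidden, T.shape[1]))    # random output weights
    # Hidden-layer target that would reproduce T through beta (least-squares sense).
    H_target = np.clip(T @ np.linalg.pinv(beta), eps, 1 - eps)
    Z = np.log(H_target / (1 - H_target))                 # invert the sigmoid
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    Wb = np.linalg.pinv(Xb) @ Z                           # input weights and bias
    return Wb[:-1], Wb[-1], beta
```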


2012 ◽  
Vol 608-609 ◽  
pp. 564-568 ◽  
Author(s):  
Yi Hui Zhang ◽  
He Wang ◽  
Zhi Jian Hu ◽  
Meng Lin Zhang ◽  
Xiao Lu Gong ◽  
...  

Extreme learning machine (ELM) is a new and effective learning algorithm for single-hidden-layer feed-forward neural networks. The extreme learning machine only needs the number of hidden-layer nodes to be set; there is no need to adjust the input weights or the hidden-unit biases, and it produces a unique optimal solution, so it has the advantages of fast learning and good generalization ability. The back-propagation (BP) neural network, by contrast, is the most mature and widely applied method. This paper introduces the extreme learning machine into wind power prediction and compares it with a wind power prediction method based on the BP neural network. The study shows that the extreme learning machine achieves better prediction accuracy and shorter model training time.


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Dong Xiao ◽  
Beijing Li ◽  
Yachun Mao

Extreme learning machine (ELM) is a rapid learning algorithm for the single-hidden-layer feedforward neural network: it randomly initializes the weights between the input layer and the hidden layer as well as the biases of the hidden-layer neurons, and finally uses the least-squares method to calculate the weights between the hidden layer and the output layer. This paper proposes a multiple-hidden-layer ELM (MELM for short) which inherits the parameters of the first hidden layer from the standard ELM. The parameters of the remaining hidden layers are obtained by a method that makes the actual hidden-layer output approach the expected hidden-layer output with (near-)zero error, as sketched below. Based on the MELM algorithm, extensive experiments on regression and classification show that the MELM achieves satisfactory results in terms of average precision and good generalization performance compared with the two-hidden-layer ELM (TELM), the original ELM, and some other multilayer ELM variants.
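
One way to read the "actual output approaches the expected hidden-layer output" step, following the two-hidden-layer ELM (TELM) construction and assuming a sigmoid activation and equal layer widths, is the following sketch.

```python
import numpy as np

def add_second_hidden_layer(H1, T, beta, eps=1e-6):
    """Hedged sketch of adding one hidden layer: derive the expected output of
    the new layer from the labels and current output weights, then solve the new
    layer's parameters analytically. Assumes the new layer has the same width as
    the previous one (H1 @ beta ~= T before this step)."""
    # Expected hidden output H2 such that H2 @ beta ~= T (least-squares sense).
    H2_expected = np.clip(T @ np.linalg.pinv(beta), eps, 1 - eps)
    Z = np.log(H2_expected / (1 - H2_expected))           # invert the sigmoid
    H1b = np.hstack([H1, np.ones((H1.shape[0], 1))])
    Wb = np.linalg.pinv(H1b) @ Z                          # new layer's weights and bias
    H2 = 1.0 / (1.0 + np.exp(-(H1b @ Wb)))                # actual output of new layer
    beta_new = np.linalg.pinv(H2) @ T                     # refreshed output weights
    return Wb, H2, beta_new
```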


2018 ◽  
Vol 246 ◽  
pp. 03018
Author(s):  
Zuozhi Liu ◽  
JinJian Wu ◽  
Jianpeng Wang

Extreme learning machine (ELM) is a novel learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs). Although it shows fast learning speed in many areas, there is still room for improvement in computational cost. To address this issue, this paper proposes an improved ELM (FRCF-ELM) which employs the full-rank Cholesky factorization to compute the output weights instead of the traditional SVD. In addition, this paper proves in theory that the proposed FRCF-ELM has lower computational complexity. Experimental results on several benchmark applications indicate that the proposed FRCF-ELM learns faster than the original ELM algorithm while preserving good generalization performance.
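
A sketch of the idea is given below: the output weights are obtained by solving the normal equations with a Cholesky factorization instead of the SVD-based pseudoinverse. The small ridge term that keeps H^T H positive definite is an assumption of the sketch, not a detail from the paper.

```python
import numpy as np

def elm_output_weights_cholesky(H, T, reg=1e-8):
    """Solve the ELM output weights via Cholesky instead of SVD.
    H is the hidden-layer output matrix, T the target matrix."""
    A = H.T @ H + reg * np.eye(H.shape[1])   # symmetric positive definite
    L = np.linalg.cholesky(A)                # A = L @ L.T
    # Two triangular solves replace the SVD-based pseudoinverse.
    y = np.linalg.solve(L, H.T @ T)
    beta = np.linalg.solve(L.T, y)
    return beta
```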


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Imen Jammoussi ◽  
Mounir Ben Nasr

Extreme learning machine is a fast learning algorithm for the single-hidden-layer feedforward neural network. However, an improper number of hidden neurons and the random parameters have a great effect on the performance of the extreme learning machine. In order to select a suitable number of hidden neurons, this paper proposes a novel hybrid learning algorithm based on a two-step process. First, the parameters of the hidden layer are adjusted by a self-organized learning algorithm. Next, the weight matrix of the output layer is determined using the Moore–Penrose inverse method. Nine classification datasets are considered to demonstrate the efficiency of the proposed approach compared with the original extreme learning machine, the Tikhonov-regularization optimally pruned extreme learning machine, and backpropagation algorithms. The results show that the proposed method is fast and produces better accuracy and generalization performance.
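
A rough sketch of the two-step process follows; simple competitive learning stands in for the paper's self-organized learning algorithm, and a Gaussian-style hidden response is assumed.

```python
import numpy as np

def self_organized_hidden_layer(X, n_hidden=20, n_epochs=10, lr=0.1, seed=0):
    """Step one (stand-in): competitive learning moves each hidden prototype
    toward the inputs it wins. Assumes n_hidden <= number of samples."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(X.shape[0], n_hidden, replace=False)].copy()
    for _ in range(n_epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])
    return W

def fit_output_layer(X, T, W):
    """Step two: hidden activations from the organized prototypes, then output
    weights by the Moore-Penrose inverse."""
    H = np.exp(-np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2) ** 2)
    return np.linalg.pinv(H) @ T
```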


2012 ◽  
Vol 241-244 ◽  
pp. 1762-1767 ◽  
Author(s):  
Ya Juan Tian ◽  
Hua Xian Pan ◽  
Xuan Chao Liu ◽  
Guo Jian Cheng

To overcome the low training speed and difficult parameter selection of the traditional support vector machine (SVM), a method based on the extreme learning machine (ELM) for lithofacies recognition is presented in this paper. ELM is a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). It not only simplifies the parameter selection process but also improves the training speed of the network. By determining the optimal parameters, a lithofacies classification model is established, and the classification results of ELM are compared with those of the traditional SVM. The experimental results show that ELM with fewer neurons achieves classification accuracy similar to SVM, its parameters are easier to select, and its training time is significantly shorter. The feasibility and applicability of ELM for lithofacies recognition are thereby verified and validated.


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Qinwei Fan ◽  
Ting Liu

Extreme learning machine (ELM) has been put forward for single hidden layer feedforward networks. Because of its powerful modeling ability and its minimal need for human intervention, the ELM algorithm has been used widely in both regression and classification experiments. However, in order to achieve the required accuracy, it needs many more hidden nodes than are typically needed by conventional neural networks. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization. The algorithm updates the weights in the direction along which the overall squared error decreases most, and it can therefore sparsify the network structure very efficiently. The numerical experiments show that the ELM algorithm with smoothing L0 regularization has fewer hidden nodes but better generalization performance than the original ELM and the ELM with L1 regularization.
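
A sketch of one such update on the output weights is shown below. The Gaussian smoother 1 - exp(-beta^2 / (2*sigma^2)) is a common smoothing of the L0 norm and is assumed here; the paper's exact smoothing function is not given in the abstract.

```python
import numpy as np

def smoothed_l0_update(H, T, beta, lam=1e-3, sigma=0.5, lr=1e-2):
    """One gradient step on the ELM output weights with a smoothed L0 penalty.
    Objective: 0.5*||H@beta - T||^2 + lam * sum(1 - exp(-beta^2 / (2*sigma^2)))."""
    residual = H @ beta - T
    grad_err = H.T @ residual                                   # squared-error gradient
    grad_pen = (beta / sigma**2) * np.exp(-beta**2 / (2 * sigma**2))  # penalty gradient
    return beta - lr * (grad_err + lam * grad_pen)
```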

