Rank-Based Pseudoinverse Computation in Extreme Learning Machine for Large Datasets

Extreme Learning Machine (ELM) is an efficient and effective least-squares-based learning algorithm for classification and regression problems, built on the single-hidden-layer feed-forward neural network (SLFN). The literature shows that it converges quickly and generalizes well on moderate-sized datasets. However, computing the pseudoinverse becomes a serious challenge when there is a large number of hidden nodes, or a large number of instances must be trained for complex pattern recognition problems. To address this problem, a few approaches such as EM-ELM and DF-ELM have been proposed in the literature. In this paper, a new rank-based decomposition of the hidden-layer matrix is introduced to achieve optimal training time and reduce the computational complexity for a large number of hidden nodes. The results show that the proposed method has a near-constant training time, close to the minimal training time and far from the worst-case training time of the DF-ELM algorithm, which has been shown to be efficient in the recent literature.
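
For context, here is a minimal numpy sketch of the baseline ELM training step whose cost the paper attacks: the output weights are obtained by applying the pseudoinverse of the hidden-layer matrix to the targets, and it is this `pinv` call that dominates when the number of hidden nodes or training instances grows. The tanh activation and helper names are illustrative assumptions; the paper's rank-based decomposition is not reproduced here.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Baseline ELM: random hidden layer, pseudoinverse for the output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never tuned)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # SVD-based pseudoinverse: the costly step
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```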

2018, Vol 2018, pp. 1-13
Author(s): Zhike Zhao, Xiaoguang Zhang

An improved classification approach based on the extreme learning machine (ELM) is proposed to address the hot research problem of complex multiclass samples. ELM was proposed for the single-hidden-layer feed-forward neural network (SLFNN) and is characterized by simple parameter selection rules, fast convergence, little human intervention, and so on. To further improve the classification precision of ELM, an improved method for generating the network structure of ELM is developed by dynamically adjusting the number of hidden nodes; the change in the number of hidden nodes serves as the update step length of the algorithm. The improved algorithm is called the variable step incremental extreme learning machine (VSI-ELM). To verify the effect of the hidden-layer nodes on the performance of ELM, datasets from the open-source machine learning repository of the University of California, Irvine (UCI) serve as the performance test data. Regression and classification experiments are used to study the performance of the VSI-ELM model, and the experimental results show that the algorithm is valid. Classifying different degrees of broken wires remains an open problem in the nondestructive testing of hoisting wire rope; the magnetic flux leakage (MFL) method is an efficient nondestructive technique that plays an important role in safety evaluation. The proposed VSI-ELM model proves effective and reliable on real application data and is used to classify different types of samples from MFL signals. The final experimental results show that the VSI-ELM algorithm achieves faster classification and higher accuracy for different broken wires.
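
The abstract does not spell out the exact step-length rule, so the sketch below only illustrates the general idea under stated assumptions: grow the hidden layer by a variable increment (here doubled after each failed attempt, an assumed rule) until the training error meets a tolerance.

```python
import numpy as np

def vsi_elm_sketch(X, T, step=5, max_hidden=200, tol=1e-2):
    """Grow the hidden layer by a variable step until training RMSE meets tol."""
    rng = np.random.default_rng(0)
    n = 0
    while n < max_hidden:
        n = min(n + step, max_hidden)
        W = rng.standard_normal((X.shape[1], n))
        b = rng.standard_normal(n)
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ T
        rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
        if rmse < tol:
            break
        step *= 2  # enlarge the increment while far from the target (assumed rule)
    return W, b, beta, n
```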


2016, Vol 8 (1), pp. 5-15
Author(s): Liu Yusong, Su Zhixun, Zhang Bingjie, Gong Xiaoling, Sang Zhaoyang

Extreme learning machine (ELM) is an efficient algorithm, but it requires more hidden nodes than BP algorithms to reach matched performance. Recently, an efficient learning algorithm, the upper-layer-solution-unaware algorithm (USUA), was proposed for the single-hidden-layer feed-forward neural network; it needs fewer hidden nodes and less testing time than ELM. In this paper, we give a theoretical analysis of USUA. The theoretical results show that the error function decreases monotonically during training, that the gradient of the error function with respect to the weights tends to zero (weak convergence), and that the weight sequence converges to a fixed point (strong convergence) as the number of iterations approaches infinity. An illustrative simulation on the MNIST database of handwritten digits effectively verifies the theoretical results.
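
The convergence claims concern the training dynamics rather than any dataset. The loop below is a generic SLFN gradient descent, not USUA's actual update; it merely illustrates the monotone error decrease the theorems assert for a sufficiently small learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
T = np.sin(X.sum(axis=1, keepdims=True))           # toy regression target

n_hidden, lr = 20, 0.05
W = 0.1 * rng.standard_normal((5, n_hidden))       # lower-layer weights
v = 0.1 * rng.standard_normal((n_hidden, 1))       # upper-layer weights

errors = []
for _ in range(300):
    H = np.tanh(X @ W)
    E = H @ v - T
    errors.append(0.5 * np.mean(E ** 2))
    gv = H.T @ E / len(X)                           # gradient w.r.t. upper layer
    gW = X.T @ ((E @ v.T) * (1 - H ** 2)) / len(X)  # gradient w.r.t. lower layer
    v -= lr * gv
    W -= lr * gW

print(errors[0], errors[-1])  # the error sequence shrinks steadily for small lr
```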


2012, Vol 608-609, pp. 564-568
Author(s): Yi Hui Zhang, He Wang, Zhi Jian Hu, Meng Lin Zhang, Xiao Lu Gong, ...

Extreme learning machine (ELM) is a new and effective learning algorithm for single-hidden-layer feed-forward neural networks. It only requires setting the number of hidden-layer nodes; there is no need to adjust the network's input weights or the hidden units' biases, and it produces a unique optimal solution, so it learns fast and generalizes well. The back-propagation (BP) neural network, by contrast, is the most maturely applied method. This paper introduces the extreme learning machine into wind power prediction and compares it with a wind power prediction method based on the BP neural network. The study shows that the extreme learning machine achieves better prediction accuracy and shorter model training time.
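
A rough timing comparison along these lines can be sketched as follows; the data are synthetic stand-ins for wind-farm measurements, and scikit-learn's MLPRegressor is assumed as the BP-trained baseline.

```python
import time
import numpy as np
from sklearn.neural_network import MLPRegressor  # BP-trained network (assumes scikit-learn)

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 8))                     # stand-in for wind-farm features
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)  # stand-in for wind power output

t0 = time.perf_counter()                               # ELM: one closed-form solve
W, b = rng.standard_normal((8, 50)), rng.standard_normal(50)
beta = np.linalg.pinv(np.tanh(X @ W + b)) @ y
t_elm = time.perf_counter() - t0

t0 = time.perf_counter()                               # BP: iterative weight updates
mlp = MLPRegressor(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)
t_bp = time.perf_counter() - t0
print(f"ELM: {t_elm:.3f}s  BP: {t_bp:.3f}s")
```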


2020, Vol 2020, pp. 1-10
Author(s): Qinwei Fan, Ting Liu

Extreme learning machine (ELM) was put forward for single-hidden-layer feedforward networks. Because of its powerful modeling ability and the little human intervention it needs, the ELM algorithm has been used widely in both regression and classification experiments. However, to achieve the required accuracy it needs many more hidden nodes than conventional neural networks typically do. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization. The novel algorithm updates the weights in the direction along which the overall squared error is reduced the most, and can thereby sparsify the network structure very efficiently. The numerical experiments show that ELM with smoothing L0 regularization uses fewer hidden nodes yet achieves better generalization performance than the original ELM and ELM with L1 regularization.
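
A minimal sketch of the idea, assuming the common Gaussian surrogate for the L0 norm (the paper's exact smoothing function and its steepest-reduction update are not reproduced): plain gradient descent on the penalized squared error drives many output weights toward zero, pruning the corresponding hidden nodes.

```python
import numpy as np

def sl0_penalty_grad(beta, sigma=0.1):
    """Gradient of a Gaussian surrogate for the L0 norm, sum(1 - exp(-b^2 / 2s^2))."""
    return (beta / sigma ** 2) * np.exp(-beta ** 2 / (2 * sigma ** 2))

def elm_smoothing_l0(H, T, lam=1e-3, lr=1e-2, iters=2000):
    """Gradient descent on ||H beta - T||^2 / n + lam * smoothed_L0(beta)."""
    beta = np.zeros((H.shape[1], T.shape[1]))
    for _ in range(iters):
        grad = 2 * H.T @ (H @ beta - T) / len(H) + lam * sl0_penalty_grad(beta)
        beta -= lr * grad
    return beta  # rows near zero mark hidden nodes that can be pruned
```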


2014, Vol 989-994, pp. 3679-3682
Author(s): Meng Meng Ma, Bo He

Extreme learning machine (ELM), a relatively novel machine learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs), has shown competitive performance thanks to its simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep structure of ELM, DS-ELM for short, is proposed in this paper. DS-ELM contains three levels of networks: the first level is an auto-associative neural network (AANN) trained to filter out noise and reduce dimensionality when necessary; the second level is another AANN that fixes the input weights and biases of the ELM; and the last level is the ELM itself. Experiments on four noisy datasets examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
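
A loose stand-in for this pipeline is sketched below. The paper trains its AANNs by back-propagation; here, for brevity, each auto-associative level is approximated by an ELM-style autoencoder (random encoder, least-squares decoder), with the second level supplying the fixed hidden layer of the final ELM.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder(X, n_code):
    """Auto-associative layer: random encoder, least-squares decoder for reconstruction."""
    W = rng.standard_normal((X.shape[1], n_code))
    b = rng.standard_normal(n_code)
    H = np.tanh(X @ W + b)
    D = np.linalg.pinv(H) @ X           # decoder mapping codes back to the input
    encode = lambda Z: np.tanh(Z @ W + b)
    return encode, H @ D                # encoder and denoised reconstruction of X

def ds_elm_sketch(X_noisy, T, code1=20, code2=15):
    enc1, X_denoised = elm_autoencoder(X_noisy, code1)  # level 1: denoise / reduce dims
    Z1 = enc1(X_denoised)
    enc2, _ = elm_autoencoder(Z1, code2)                # level 2: fixes the ELM's hidden layer
    H = enc2(Z1)                                        # hidden output with AANN-fixed weights
    return np.linalg.pinv(H) @ T                        # level 3: ELM least-squares readout
```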


Symmetry, 2019, Vol 11 (10), pp. 1284
Author(s): Licheng Cui, Huawei Zhai, Hongfei Lin

An extreme learning machine (ELM) is an innovative algorithm for single-hidden-layer feed-forward neural networks; essentially, it only has to find the output weights that minimize the output error, via least-squares regression from the hidden layer to the output layer. Focusing on the output weights, we introduce an orthogonality constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on column-by-column optimization, whose main characteristic is that optimizing the full output weight matrix is decomposed into optimizing its individual column vectors. The complex orthogonal Procrustes problem is thus transformed into simple least-squares regression with an orthogonality constraint, which preserves more information from the ELM feature space in the output subspace and gives NOELM better regression and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time, and accuracy.
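
For reference, the classical balanced-case solution of the orthogonal Procrustes problem the abstract refers to is a one-line SVD; NOELM's column-by-column optimization itself is not reproduced here.

```python
import numpy as np

def orthogonal_output_weights(H, T):
    """Orthogonal Procrustes: min ||H B - T||_F subject to B^T B = I.
    With U S V^T = svd(H^T T), the classical minimizer is B = U V^T
    (exact in the balanced, square-B case)."""
    U, _, Vt = np.linalg.svd(H.T @ T, full_matrices=False)
    return U @ Vt
```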


Author(s): Junhai Zhai, Hongyu Xu, Yan Li

Extreme learning machine (ELM) is an efficient and practical learning algorithm for training single-hidden-layer feed-forward neural networks (SLFNs). ELM provides good generalization performance at extremely fast learning speed, but it suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, this paper proposes a fusion of extreme learning machines (F-ELM) with the fuzzy integral. The proposed algorithm consists of three stages. First, the bootstrap technique is employed to generate several subsets of the original dataset. Second, probabilistic SLFNs are trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with the fuzzy integral. The experimental results show that the proposed approach alleviates the problems mentioned above to some extent and increases prediction accuracy.
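
A compact sketch of the three stages follows, with plain output averaging standing in for the fuzzy-integral fusion of the final stage (the fuzzy measure and integral are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, T, n_hidden=30):
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ T
    return W, b, beta

def f_elm_sketch(X, T, n_models=5):
    """Stage 1: bootstrap subsets; stage 2: one ELM per subset; stage 3: fuse outputs."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))     # bootstrap resample
        models.append(train_elm(X[idx], T[idx]))
    def predict(Xn):
        outs = [np.tanh(Xn @ W + b) @ beta for W, b, beta in models]
        return np.mean(outs, axis=0)                   # stand-in for fuzzy-integral fusion
    return predict
```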


2020, Vol 2020, pp. 1-7
Author(s): Jie Lai, Xiaodan Wang, Rui Li, Yafei Song, Lei Lei

To prevent overfitting and improve the generalization performance of the Extreme Learning Machine (ELM), this paper proposes a new regularization method, Biased DropConnect, and a new regularized ELM combining Biased DropConnect and Biased Dropout (BD-ELM). Like Biased Dropout applied to hidden nodes, Biased DropConnect exploits differences among the connection weights to retain more network information after dropping. Regular Dropout and DropConnect set the hidden-layer outputs and connection weights to 0 with a single fixed probability. Biased DropConnect and Biased Dropout instead divide the connection weights and hidden nodes into high and low groups by a threshold and zero out each group with a different probability: connection weights with high values and hidden nodes with high activations, which contribute more to network performance, are kept with a lower drop probability, while low-valued weights and nodes are given a higher drop probability, so that the drop probability of the whole network stays at a fixed constant. With Biased DropConnect and Biased Dropout regularization, BD-ELM gains sparser parameters and reduced structural complexity. Experiments on various benchmark datasets show that Biased DropConnect and Biased Dropout effectively address overfitting, and that BD-ELM provides higher classification accuracy than ELM, R-ELM, and Drop-ELM.
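
A minimal sketch of Biased DropConnect under stated assumptions: the median weight magnitude is used as the grouping threshold (an assumed choice; the abstract does not give the threshold rule).

```python
import numpy as np

def biased_dropconnect(W, p_high=0.2, p_low=0.8, rng=np.random.default_rng(0)):
    """Zero out each weight group with its own probability.
    With a median threshold the groups are equal-sized, so the overall drop
    rate stays fixed at (p_high + p_low) / 2, matching the fixed-constant idea."""
    high = np.abs(W) >= np.median(np.abs(W))   # high group: larger-magnitude weights
    p_drop = np.where(high, p_high, p_low)     # important weights get the lower drop rate
    mask = rng.random(W.shape) >= p_drop       # keep each weight with probability 1 - p_drop
    return W * mask
```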


Algorithms, 2018, Vol 11 (10), pp. 158
Author(s): Sathya Madhusudhanan, Suresh Jaganathan, Jayashree L S

Unstructured data are irregular information with no predefined data model. Streaming data, which arrive constantly over time, are unstructured, and classifying them is a tedious task because they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive algorithm, uses the previously learned model information, then learns and accommodates new information from newly arrived data to produce a new model, avoiding retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose CUIL (Classification of Unstructured data using Incremental Learning), a framework which clusters the metadata, assigns a label to each cluster, and then incrementally builds a model for each arriving batch of data using the Extreme Learning Machine (ELM), a feed-forward neural network. The proposed framework trains the batches separately, significantly reducing memory use and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. The tabulated results show that the proposed work achieves greater accuracy and efficiency.
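
CUIL's exact update is not given in the abstract; a standard way to add batches to an ELM without retraining from scratch is the OS-ELM recursive least-squares update, sketched here as an illustration.

```python
import numpy as np

class IncrementalELM:
    """OS-ELM-style recursive least squares: each batch updates beta in closed form."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed random input weights
        self.b = rng.standard_normal(n_hidden)
        self.P = 1e3 * np.eye(n_hidden)                 # running estimate of (H^T H)^-1
        self.beta = np.zeros((n_hidden, n_out))

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T):
        H = self._h(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.beta += K @ (T - H @ self.beta)            # correct beta with the new batch
        self.P -= K @ H @ self.P                        # shrink the uncertainty estimate

    def predict(self, X):
        return self._h(X) @ self.beta
```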


2016, Vol 2016, pp. 1-10
Author(s): Shan Pang, Xinyi Yang

In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, the deep convolutional extreme learning machine (DC-ELM), which combines the feature-extraction power of CNNs with the fast training of ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance at faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, saving much training time and computational resources. We systematically evaluated DC-ELM on two handwritten digit datasets, MNIST and USPS. Experimental results show that our method achieves better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods.
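
A rough sketch of the front end under stated assumptions: random (untrained) convolution filters, ReLU, and a crude stride-2 subsampling stand in for the alternating convolution/pooling layers; the paper's stochastic pooling is not reproduced. The resulting features would be fed to an ELM readout such as the baseline sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conv_features(imgs, n_filters=8, k=5):
    """Random convolution filters + ReLU + stride-2 subsampling over grayscale images."""
    n, Hh, Ww = imgs.shape
    feats = []
    for f in rng.standard_normal((n_filters, k, k)):
        fm = np.zeros((n, Hh - k + 1, Ww - k + 1))
        for i in range(Hh - k + 1):              # 'valid' convolution, looped for clarity
            for j in range(Ww - k + 1):
                fm[:, i, j] = np.einsum('nij,ij->n', imgs[:, i:i+k, j:j+k], f)
        fm = np.maximum(fm, 0)[:, ::2, ::2]      # ReLU, then crude pooling
        feats.append(fm.reshape(n, -1))
    return np.hstack(feats)                      # flattened features for the ELM classifier
```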

