Extreme Learning Machines on High Dimensional and Large Data Applications: A Survey

2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Jiuwen Cao ◽  
Zhiping Lin

Extreme learning machine (ELM) has been developed for single hidden layer feedforward neural networks (SLFNs). In the ELM algorithm, the connections between the input layer and the hidden neurons are randomly assigned and remain unchanged during the learning process. The output connections are then tuned by minimizing the cost function through a linear system. The computational burden of ELM is thus significantly reduced, as the only cost is solving a linear system. This low computational complexity has attracted a great deal of attention from the research community, especially for high-dimensional and large data applications. This paper provides an up-to-date survey on the recent developments of ELM and its applications to high-dimensional and large data. Comprehensive reviews of ELM in image processing, video processing, medical signal processing, and other popular large data applications are presented in the paper.
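
The mechanics summarized above (random hidden-layer parameters, analytic output weights) can be illustrated with a minimal NumPy sketch. The function names and the choice of the tanh activation are illustrative assumptions, not details taken from the survey:

```python
import numpy as np

def train_elm(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Train a basic ELM: random hidden layer, least-squares output layer.

    X : (n_samples, n_features) inputs
    T : (n_samples, n_outputs) targets
    """
    n_features = X.shape[1]
    # Input weights and biases are assigned randomly and never updated.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer output matrix
    # Output weights minimize ||H beta - T||^2 via the Moore-Penrose pseudoinverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```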

Author(s):  
Fan Wu ◽  
Si Hong ◽  
Wei Zhao ◽  
Xiaoyan Wang ◽  
Xun Shao ◽  
...  

Abstract: Accurate demand prediction for bike-sharing is an important prerequisite for reducing scheduling costs and improving user satisfaction. However, it is a challenging problem due to the stochasticity and non-linearity of bike-sharing systems. In this paper, a model called the pseudo-double hidden layer feedforward neural network is proposed to approximately predict the actual demand of bike-sharing. Specifically, to overcome the limitations of the traditional back-propagation learning process, an extreme learning machine with improved particle swarm optimization is designed to construct the learning rules of the neural network. The performance is verified by comparison with other learning algorithms on the dataset from the Streeter Dr bike-sharing station in Chicago.
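
The abstract does not detail the improved particle swarm optimization, but the general idea of PSO-assisted ELM can be sketched as follows: each particle encodes a candidate set of hidden-layer weights and biases, and its fitness is the validation error of an ELM trained with those parameters. This is a schematic sketch with standard (not "improved") PSO dynamics; all names and hyperparameters are illustrative:

```python
import numpy as np

def elm_fitness(params, X_tr, T_tr, X_va, T_va, n_hidden):
    """Validation RMSE of an ELM whose hidden parameters are given by `params`."""
    d = X_tr.shape[1]
    W = params[: d * n_hidden].reshape(d, n_hidden)
    b = params[d * n_hidden :]
    H_tr = np.tanh(X_tr @ W + b)
    beta = np.linalg.pinv(H_tr) @ T_tr            # analytic output weights
    pred = np.tanh(X_va @ W + b) @ beta
    return np.sqrt(np.mean((pred - T_va) ** 2))

def pso_elm(X_tr, T_tr, X_va, T_va, n_hidden, n_particles=20, iters=50,
            w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    """Standard PSO over the hidden-layer weights and biases of an ELM."""
    dim = X_tr.shape[1] * n_hidden + n_hidden
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([elm_fitness(p, X_tr, T_tr, X_va, T_va, n_hidden) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([elm_fitness(p, X_tr, T_tr, X_va, T_va, n_hidden) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest
```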


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Xinran Zhou ◽  
Zijian Liu ◽  
Congxu Zhu

To apply single hidden-layer feedforward neural networks (SLFNs) to the identification of time-varying systems, an online regularized extreme learning machine (ELM) with forgetting mechanism (FORELM) and an online kernelized ELM with forgetting mechanism (FOKELM) are presented in this paper. FORELM updates the output weights of the SLFN recursively using the Sherman-Morrison formula, and it combines the advantages of the online sequential ELM with forgetting mechanism (FOS-ELM) and the regularized online sequential ELM (ReOS-ELM); that is, it can capture the latest properties of the identified system by learning from a certain number of the newest samples, and it also avoids ill-conditioned matrix inversion through regularization. FOKELM tackles the matrix-expansion problem of the kernel-based incremental ELM (KB-IELM) by deleting the oldest sample according to the block matrix inverse formula as samples arrive continually. The experimental results show that the proposed FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments; moreover, FORELM and FOKELM are more time-efficient than the dynamic regression extreme learning machine (DR-ELM) under certain conditions.
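
The recursive output-weight update mentioned above can be illustrated with the Sherman-Morrison identity: maintaining P = (H^T H + λI)^(-1) lets the newest hidden-layer row be absorbed, and the oldest one forgotten, without re-inverting the full matrix. This is only a schematic rank-one update/downdate under standard regularized-least-squares assumptions; the exact FORELM recursion may differ in detail:

```python
import numpy as np

def sm_add(P, h):
    """Rank-one update: return (A + h h^T)^(-1) given P = A^(-1)."""
    Ph = P @ h
    return P - np.outer(Ph, Ph) / (1.0 + h @ Ph)

def sm_remove(P, h):
    """Rank-one downdate: return (A - h h^T)^(-1) given P = A^(-1)."""
    Ph = P @ h
    return P + np.outer(Ph, Ph) / (1.0 - h @ Ph)

def online_step(P, beta, h_new, t_new, h_old=None, t_old=None):
    """Absorb the newest sample and (optionally) forget the oldest one.

    P    : current (H^T H + lambda*I)^(-1)
    beta : current output weights, shape (n_hidden, n_outputs)
    h_*  : hidden-layer output rows, t_* : target rows
    """
    P = sm_add(P, h_new)
    beta = beta + np.outer(P @ h_new, t_new - h_new @ beta)
    if h_old is not None:
        P = sm_remove(P, h_old)
        beta = beta - np.outer(P @ h_old, t_old - h_old @ beta)
    return P, beta
```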


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Hai-Gang Zhang ◽  
Sen Zhang ◽  
Yi-Xin Yin

It is well known that feedforward neural networks encounter a number of difficulties in applications because of their slow learning speed. The extreme learning machine (ELM) is a single hidden layer feedforward neural network method aimed at improving training speed. The ELM algorithm has since been widely applied owing to its good generalization performance and fast learning speed. However, several problems in ELM still need to be solved. In this paper, an improved ELM algorithm named R-ELM is proposed to handle the multicollinearity problem that arises in the computation of the ELM algorithm. The proposed algorithm is employed for bearing fault detection using stator current monitoring. Simulation results show that the R-ELM algorithm has better stability and generalization performance than the original ELM and other neural network methods.
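
The abstract does not spell out how R-ELM treats multicollinearity, but the usual remedy in ELM is Tikhonov (ridge) regularization of the output-weight solution, sketched below as a generic illustration rather than the authors' exact formulation:

```python
import numpy as np

def elm_output_weights(H, T, lam=1e-3):
    """Ridge-regularized output weights: beta = (H^T H + lam*I)^(-1) H^T T.

    The lam*I term keeps the normal-equation matrix well conditioned even
    when hidden-layer columns are nearly collinear.
    """
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
```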


2017 ◽  
Vol 1 (1) ◽  
pp. 22-32
Author(s):  
Afifah Arifianty ◽  
Mulyono Mulyono ◽  
Med Irzal

Abstract: The Composite Stock Price Index (IHSG) is a value used to measure the performance of all listed stocks. The IHSG reflects the development of the market as a whole. If the IHSG rises relative to the previous day, it can be concluded that several stocks on the exchange have risen. Price forecasting is therefore very useful for investors, allowing them to assess the prospects of stock investments in the future. Many forecasting methods exist, but previously available methods require relatively long computation times. There is concern that artificial neural network (ANN) methods will increasingly be abandoned because of the long time needed for decision making. To address this problem, Huang (2004) introduced a learning method for ANNs called the Extreme Learning Machine (ELM). ELM is a feedforward neural network with a single hidden layer, better known as a Single hidden Layer Feedforward neural Network (SLFN) (Sun et al., 2008). In this method, the only factor used for forecasting is past data, not other factors such as politics, the economy, and so on. Keywords: Composite Stock Price Index, Forecasting, Artificial Neural Network, Extreme Learning Machine.
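
Since only past index values are used as inputs, a typical setup is to build lagged feature windows and feed them to an ELM regressor. The sketch below shows such a windowing step; the window length, the variable names, and the reuse of the basic ELM sketch from the first abstract above are illustrative assumptions, not details from the paper:

```python
import numpy as np

def make_lagged_dataset(series, window=5):
    """Turn a 1-D price series into (inputs, targets) for one-step forecasting.

    Each input row contains `window` consecutive past values; the target is
    the next value in the series.
    """
    series = np.asarray(series, dtype=float)
    X = np.array([series[i : i + window] for i in range(len(series) - window)])
    y = series[window:].reshape(-1, 1)
    return X, y

# Hypothetical usage with the basic ELM sketch defined earlier:
# X, y = make_lagged_dataset(ihsg_closing_prices, window=5)
# W, b, beta = train_elm(X[:-30], y[:-30], n_hidden=40)
# forecast = predict_elm(X[-30:], W, b, beta)
```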


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Imen Jammoussi ◽  
Mounir Ben Nasr

The extreme learning machine is a fast learning algorithm for single hidden layer feedforward neural networks. However, an improper number of hidden neurons and the random parameters have a great effect on its performance. In order to select a suitable number of hidden neurons, this paper proposes a novel hybrid learning method based on a two-step process. First, the parameters of the hidden layer are adjusted by a self-organized learning algorithm. Next, the weight matrix of the output layer is determined using the Moore–Penrose inverse method. Nine classification datasets are used to demonstrate the efficiency of the proposed approach compared with the original extreme learning machine, the Tikhonov-regularization optimally pruned extreme learning machine, and backpropagation algorithms. The results show that the proposed method is fast and produces better accuracy and generalization performance.
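
The self-organized rule used in the first step is not specified in the abstract; one common choice is a simple winner-takes-all competitive update of hidden-unit centres, followed by the Moore-Penrose solution for the output weights. The following sketch is therefore an assumption-laden illustration of the two-step idea, not the authors' algorithm:

```python
import numpy as np

def competitive_hidden_layer(X, n_hidden, epochs=10, lr=0.1,
                             rng=np.random.default_rng(0)):
    """Step 1 (illustrative): adjust hidden-unit centres with a winner-takes-all
    competitive rule so they cover the input distribution."""
    centres = X[rng.choice(len(X), n_hidden, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(centres - x, axis=1))
            centres[winner] += lr * (x - centres[winner])
    return centres

def hybrid_elm(X, T, n_hidden, gamma=1.0):
    """Step 2: analytic output weights via the Moore-Penrose pseudoinverse."""
    centres = competitive_hidden_layer(X, n_hidden)
    # Radial-basis activations over distances to the learned centres.
    H = np.exp(-gamma * ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1))
    beta = np.linalg.pinv(H) @ T
    return centres, beta
```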


2012 ◽  
Vol 241-244 ◽  
pp. 1762-1767 ◽  
Author(s):  
Ya Juan Tian ◽  
Hua Xian Pan ◽  
Xuan Chao Liu ◽  
Guo Jian Cheng

To overcome the low training speed and difficult parameter selection of the traditional support vector machine (SVM), a method based on the extreme learning machine (ELM) is presented in this paper for lithofacies recognition. ELM is a learning algorithm for single-hidden layer feedforward neural networks (SLFNN). Not only does it simplify the parameter selection process, but it also improves the training speed of network learning. By determining the optimal parameters, a lithofacies classification model is established, and the classification results of ELM are compared with those of the traditional SVM. The experimental results show that ELM achieves classification accuracy similar to SVM with fewer neurons, and its parameters are easier to select, which significantly reduces training time. The feasibility of ELM for lithofacies recognition and the availability of the algorithm are verified and validated.
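
A comparison of the kind described can be set up schematically as follows, reusing the basic ELM sketch from the first abstract alongside scikit-learn's SVC; the data, features, and hyperparameters here are placeholders, not the paper's well-log data:

```python
import numpy as np
from sklearn.svm import SVC

def elm_classify(X_train, y_train, X_test, n_hidden=50):
    """Multi-class classification with the basic ELM sketch defined earlier."""
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    T = (y_train[:, None] == classes[None, :]).astype(float)   # one-hot targets
    W, b, beta = train_elm(X_train, T, n_hidden)
    scores = predict_elm(X_test, W, b, beta)
    return classes[np.argmax(scores, axis=1)]

def svm_classify(X_train, y_train, X_test, C=1.0, gamma="scale"):
    """Baseline SVM classifier for comparison."""
    return SVC(C=C, gamma=gamma).fit(X_train, y_train).predict(X_test)
```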


2013 ◽  
Vol 765-767 ◽  
pp. 1854-1857
Author(s):  
Feng Wang ◽  
Jin Lin Ding ◽  
Hong Sun

Neural network generalized inverse (NNGI) can realize synchronous decoupling control of two motors, but the traditional neural network (NN) has many shortcomings. The regular extreme learning machine (RELM) offers fast learning and good generalization ability, which makes it an ideal approach for approximating the inverse system, but it is difficult to accurately give a reasonable number of hidden neurons. An improved incremental RELM (IIRELM) is proposed on the basis of an analysis of the RELM learning algorithm; it can automatically determine the optimal network structure by gradually adding new hidden-layer neurons. A prediction model based on IIRELM is applied to two-motor closed-loop control based on NNGI, and decoupling control between velocity and tension is realized. The experimental results show that the system has excellent performance.
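
The incremental growth of the hidden layer can be sketched as below: neurons are added one at a time, the regularized output weights are recomputed, and growth stops once the validation error no longer improves. The growth criterion and regularization used here are assumptions; the paper's exact IIRELM recursion is not given in the abstract:

```python
import numpy as np

def incremental_elm(X_tr, T_tr, X_va, T_va, max_hidden=100, lam=1e-3, tol=1e-4,
                    rng=np.random.default_rng(0)):
    """Grow the hidden layer one neuron at a time until validation error
    stops improving (schematic incremental regularized ELM)."""
    d = X_tr.shape[1]
    W = np.empty((d, 0))
    b = np.empty(0)
    best_err, best = np.inf, None
    for _ in range(max_hidden):
        # Append one new randomly generated hidden neuron.
        W = np.hstack([W, rng.standard_normal((d, 1))])
        b = np.append(b, rng.standard_normal())
        H = np.tanh(X_tr @ W + b)
        beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ T_tr)
        err = np.mean((np.tanh(X_va @ W + b) @ beta - T_va) ** 2)
        if err < best_err - tol:
            best_err, best = err, (W.copy(), b.copy(), beta)
        else:
            break          # stop growing once validation error stalls
    return best
```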


2008 ◽  
Vol 18 (05) ◽  
pp. 433-441 ◽  
Author(s):  
HIEU TRUNG HUYNH ◽  
YONGGWAN WON ◽  
JUNG-JA KIM

Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms due to the analytical determination of the output weights with a random choice of input weights and hidden layer biases. However, this algorithm often requires a large number of hidden units and thus responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden layer biases. However, this algorithm requires much time to search for optimal parameters through iterative processes and is not suitable for data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of the hidden units are determined based on a fast regularized least-squares scheme. Experimental results for many real applications with both small and large numbers of input features show that our proposed approach can achieve good generalization performance with much more compact networks and extremely high speed for both learning and testing.
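
E-ELM's evolutionary selection of input weights and biases can be sketched with a standard DE/rand/1/bin loop whose fitness is the validation error of the analytically trained ELM; the fitness measure and hyperparameters here are illustrative, and the paper's new regularized least-squares scheme for choosing input weights is not reproduced:

```python
import numpy as np

def de_elm(X_tr, T_tr, X_va, T_va, n_hidden, pop=20, gens=50, F=0.5, CR=0.9,
           rng=np.random.default_rng(0)):
    """Differential evolution over ELM input weights/biases (schematic E-ELM)."""
    d = X_tr.shape[1]
    dim = d * n_hidden + n_hidden

    def fitness(v):
        W, b = v[: d * n_hidden].reshape(d, n_hidden), v[d * n_hidden :]
        beta = np.linalg.pinv(np.tanh(X_tr @ W + b)) @ T_tr
        return np.mean((np.tanh(X_va @ W + b) @ beta - T_va) ** 2)

    P = rng.uniform(-1, 1, (pop, dim))
    f = np.array([fitness(v) for v in P])
    for _ in range(gens):
        for i in range(pop):
            a, b_, c = P[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b_ - c)                      # DE/rand/1 mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            trial = np.where(cross, mutant, P[i])
            ft = fitness(trial)
            if ft < f[i]:
                P[i], f[i] = trial, ft                     # greedy selection
    return P[np.argmin(f)]
```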


2015 ◽  
Vol 2015 ◽  
pp. 1-2 ◽  
Author(s):  
Zhiping Lin ◽  
Jiuwen Cao ◽  
Tao Chen ◽  
Yi Jin ◽  
Zhan-Li Sun ◽  
...  
