AN IMPROVEMENT OF EXTREME LEARNING MACHINE FOR COMPACT SINGLE-HIDDEN-LAYER FEEDFORWARD NEURAL NETWORKS

2008 ◽  
Vol 18 (05) ◽  
pp. 433-441 ◽  
Author(s):  
HIEU TRUNG HUYNH ◽  
YONGGWAN WON ◽  
JUNG-JA KIM

Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically once the input weights and hidden-layer biases have been chosen randomly. However, this algorithm often requires a large number of hidden units and thus responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden-layer biases. However, E-ELM requires considerable time to search for optimal parameters through iterative processes and is not suitable for data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of the hidden units are determined by a fast regularized least-squares scheme. Experimental results for many real applications, with both small and large numbers of input features, show that the proposed approach can achieve good generalization performance with much more compact networks and extremely high speed for both learning and testing.
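For illustration, a minimal sketch of the basic ELM training step described above: input weights and hidden biases are drawn at random, and the output weights are obtained analytically by least squares. The function names and the use of NumPy's pseudo-inverse are assumptions for the sketch, not the authors' implementation (which replaces the random choice of hidden parameters with a regularized least-squares scheme).

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Basic ELM: random hidden parameters, analytical output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random hidden-layer biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                      # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```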

2017 ◽  
Vol 1 (1) ◽  
pp. 22-32
Author(s):  
Afifah Arifianty ◽  
Mulyono Mulyono ◽  
Med Irzal

Abstract: The Indeks Harga Saham Gabungan (IHSG, the Indonesian composite stock price index) is a value used to measure the performance of all listed stocks. The IHSG reflects the development of the market as a whole. If the IHSG rises compared with the previous day, it can be concluded that several stocks on the exchange have risen. Price forecasting is therefore very useful for investors, because it lets them assess the prospects of stock investments in the future. There are many forecasting methods, but the previously existing methods require relatively long computation times. There is a concern that Artificial Neural Network (ANN) methods will increasingly be abandoned because of the long time needed to reach a decision. To address this problem, Huang (2004) introduced a learning method for ANNs called the Extreme Learning Machine (ELM). ELM is a feedforward neural network with one hidden layer, better known as a Single hidden Layer Feedforward neural Network (SLFN) (Sun et al., 2008). In this method, the only factor used for forecasting is historical data; other factors such as politics, the economy, and so on are not considered. Keywords: Indeks Harga Saham Gabungan, forecasting, artificial neural network, Extreme Learning Machine.


2012 ◽  
Vol 241-244 ◽  
pp. 1762-1767 ◽  
Author(s):  
Ya Juan Tian ◽  
Hua Xian Pan ◽  
Xuan Chao Liu ◽  
Guo Jian Cheng

To overcome the slow training speed and difficult parameter selection of the traditional support vector machine (SVM), a method based on the extreme learning machine (ELM) for lithofacies recognition is presented in this paper. ELM is a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). It not only simplifies the parameter selection process but also improves the training speed of network learning. By determining the optimal parameters, the lithofacies classification model is established, and the classification results of ELM are compared with those of the traditional SVM. The experimental results show that ELM, with fewer neurons, achieves classification accuracy similar to that of SVM, and its parameters are easier to select, which significantly reduces the training time. The feasibility and effectiveness of ELM for lithofacies recognition are verified and validated.


Author(s):  
Fan Wu ◽  
Si Hong ◽  
Wei Zhao ◽  
Xiaoyan Wang ◽  
Xun Shao ◽  
...  

Abstract: Accurate demand prediction for bike-sharing is an important prerequisite for reducing scheduling costs and improving user satisfaction. However, it is a challenging problem due to the stochasticity and non-linearity of bike-sharing systems. In this paper, a model called pseudo-double hidden layer feedforward neural networks is proposed to approximately predict the actual demand of bike-sharing. Specifically, to overcome the limitations of the traditional back-propagation learning process, an extreme learning machine with improved particle swarm optimization is designed to construct the learning rules of the neural network. The performance is verified by comparison with other learning algorithms on the dataset of the Streeter Dr bike-sharing station in Chicago.
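As a rough sketch of the idea of selecting ELM hidden parameters with particle swarm optimization: a plain PSO (not the paper's improved variant, and with a single hidden layer rather than the pseudo-double hidden layer model) searches over input weights and biases, while the output weights are still solved analytically. All names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit_output(X, T, W, b):
    """Given fixed hidden parameters, solve the output weights analytically."""
    return np.linalg.pinv(np.tanh(X @ W + b)) @ T

def fitness(params, X_tr, T_tr, X_val, T_val, n_in, n_hid):
    """Validation error of the ELM encoded by one particle."""
    W = params[: n_in * n_hid].reshape(n_in, n_hid)
    b = params[n_in * n_hid :]
    beta = elm_fit_output(X_tr, T_tr, W, b)
    pred = np.tanh(X_val @ W + b) @ beta
    return np.mean((pred - T_val) ** 2)

def pso_elm(X_tr, T_tr, X_val, T_val, n_hid=10, n_particles=20, iters=50):
    n_in = X_tr.shape[1]
    dim = n_in * n_hid + n_hid                       # flattened input weights + biases
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X_tr, T_tr, X_val, T_val, n_in, n_hid) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -1, 1)
        f = np.array([fitness(p, X_tr, T_tr, X_val, T_val, n_in, n_hid) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    W = gbest[: n_in * n_hid].reshape(n_in, n_hid)
    b = gbest[n_in * n_hid :]
    return W, b, elm_fit_output(X_tr, T_tr, W, b)
```

Because the output weights have a closed-form solution, each particle evaluation stays cheap, which is what makes a swarm search over the hidden parameters practical.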


2014 ◽  
Vol 989-994 ◽  
pp. 3679-3682 ◽  
Author(s):  
Meng Meng Ma ◽  
Bo He

Extreme learning machine (ELM), a relatively novel machine learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs), has shown competitive performance with a simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep structure of ELM, DS-ELM for short, is proposed in this paper. DS-ELM consists of three networks: the first is an auto-associative neural network (AANN) that filters out noise and reduces dimensionality when necessary; the second is another AANN that fixes the input weights and biases of the ELM; and the last is the ELM itself. Experiments on four noisy datasets are carried out to examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
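A compressed sketch of this idea, assuming scikit-learn is available and collapsing the two AANN stages into a single auto-associative network whose encoder weights become the ELM's hidden parameters; this is an illustrative simplification, not the authors' three-level implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def aann_elm(X, T, n_hidden=20):
    """Sketch: an auto-associative net supplies the ELM's hidden parameters."""
    # Stages 1-2 (simplified): one auto-associative network trained to reconstruct X.
    aann = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                        max_iter=2000, random_state=0)
    aann.fit(X, X)
    # Use the AANN's first-layer weights and biases as the ELM input weights/biases.
    W, b = aann.coefs_[0], aann.intercepts_[0]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T        # stage 3: ordinary ELM output weights
    return W, b, beta
```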


Author(s):  
Shuxiang Xu

An Extreme Learning Machine (ELM) randomly chooses hidden neurons and analytically determines the output weights (Huang, et al., 2005, 2006, 2008). With the ELM algorithm, only the connection weights between the hidden layer and the output layer are adjusted. The ELM algorithm tends to generalize well at a very fast learning speed: it can learn thousands of times faster than conventionally popular learning algorithms (Huang, et al., 2006). Artificial Neural Networks (ANNs) have been widely used as powerful information processing models and adopted in applications such as bankruptcy prediction, cost prediction, revenue forecasting, forecasting of share prices and exchange rates, document processing, and many more. Higher Order Neural Networks (HONNs) are ANNs in which the net input to a computational neuron is a weighted sum of products of its inputs. Real-life data are usually imperfect: they contain wrong, incomplete, or vague values. Hence, missing data are commonly found in many of the information sources used. Missing data are a common problem in statistical analysis (Little & Rubin, 1987). This chapter uses the Extreme Learning Machine (ELM) algorithm for HONN models and applies it to several significant business cases involving missing data. The experimental results demonstrate that HONN models with the ELM algorithm offer significant advantages over standard HONN models, such as faster training and improved generalization ability.
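A minimal sketch of combining a second-order HONN input expansion (pairwise product terms, so the net input is a weighted sum of products of inputs) with ELM training; the expansion scheme and names are assumptions for illustration, and the missing-data handling discussed in the chapter is omitted.

```python
import numpy as np
from itertools import combinations_with_replacement

def honn_expand(X):
    """Second-order HONN expansion: original inputs plus all pairwise products."""
    pairs = [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

def honn_elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    Z = honn_expand(X)                                # product terms form the net input
    W = rng.standard_normal((Z.shape[1], n_hidden))   # random, untrained as in ELM
    b = rng.standard_normal(n_hidden)
    H = np.tanh(Z @ W + b)
    beta = np.linalg.pinv(H) @ T                      # only output weights are solved
    return W, b, beta
```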


Author(s):  
JUNHAI ZHAI ◽  
HONGYU XU ◽  
YAN LI

Extreme learning machine (ELM) is an efficient and practical learning algorithm for training single-hidden-layer feed-forward neural networks (SLFNs). ELM can provide good generalization performance at an extremely fast learning speed. However, ELM suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, an approach that fuses extreme learning machines with the fuzzy integral (F-ELM) is proposed in this paper. The proposed algorithm consists of three stages. Firstly, the bootstrap technique is employed to generate several subsets of the original dataset. Secondly, a probabilistic SLFN is trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with the fuzzy integral. The experimental results show that the proposed approach can alleviate the problems mentioned above to some extent and can increase prediction accuracy.
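A simplified sketch of the bootstrap-and-fuse structure, with plain averaging standing in for the fuzzy-integral fusion and ordinary (non-probabilistic) SLFNs; names and settings are illustrative assumptions, not the paper's F-ELM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden):
    """One base ELM: random hidden parameters, analytical output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ T
    return W, b, beta

def bagged_elm(X, T, n_models=10, n_hidden=30):
    """Train one ELM per bootstrap resample of the training set."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample with replacement
        models.append(elm_train(X[idx], T[idx], n_hidden))
    return models

def bagged_predict(models, X):
    # Simple average in place of the fuzzy-integral fusion used in the paper.
    preds = [np.tanh(X @ W + b) @ beta for W, b, beta in models]
    return np.mean(preds, axis=0)
```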


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Xinran Zhou ◽  
Zijian Liu ◽  
Congxu Zhu

To apply single-hidden-layer feedforward neural networks (SLFNs) to the identification of time-varying systems, an online regularized extreme learning machine (ELM) with forgetting mechanism (FORELM) and an online kernelized ELM with forgetting mechanism (FOKELM) are presented in this paper. FORELM updates the output weights of the SLFN recursively by using the Sherman-Morrison formula, and it combines the advantages of the online sequential ELM with forgetting mechanism (FOS-ELM) and the regularized online sequential ELM (ReOS-ELM); that is, it can capture the latest properties of the identified system by studying a certain number of the newest samples, and it can also avoid ill-conditioned matrix inversion through regularization. FOKELM tackles the matrix-expansion problem of the kernel-based incremental ELM (KB-IELM) by deleting the oldest sample according to the block matrix inverse formula as samples arrive continually. The experimental results show that the proposed FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments; moreover, FORELM and FOKELM are superior to the dynamic regression extreme learning machine (DR-ELM) in time efficiency under certain conditions.
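A generic sketch of the recursive regularized least-squares update that FORELM is built on: the matrix P = (H^T H + lambda*I)^{-1} is kept up to date with the Sherman-Morrison formula when a new sample arrives, and with the corresponding downdate when the oldest sample is forgotten (initialize P = I/lambda and beta = 0). Variable names are assumptions; the paper's full forgetting mechanism and the kernelized FOKELM are not reproduced here.

```python
import numpy as np

def hidden_row(x, W, b):
    """Hidden-layer activations for one sample (vector of length n_hidden)."""
    return np.tanh(x @ W + b)

def rls_add(P, beta, h, t):
    """Sherman-Morrison update when a new sample (h, t) arrives.
    P approximates (H^T H + lambda*I)^{-1}; beta holds the output weights."""
    Ph = P @ h
    P = P - np.outer(Ph, Ph) / (1.0 + h @ Ph)
    beta = beta + np.outer(P @ h, t - h @ beta)
    return P, beta

def rls_remove(P, beta, h, t):
    """Inverse (downdate) step that discards the oldest sample: the forgetting step."""
    Ph = P @ h
    P = P + np.outer(Ph, Ph) / (1.0 - h @ Ph)
    beta = beta - np.outer(P @ h, t - h @ beta)
    return P, beta
```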


2017 ◽  
Vol 26 (1) ◽  
pp. 185-195 ◽  
Author(s):  
Jie Wang ◽  
Liangjian Cai ◽  
Xin Zhao

Abstract: As we are usually confronted with a large instance space in real-world data sets, it is important to develop a useful and efficient multiple-instance learning (MIL) algorithm. MIL, where training data are prepared in the form of labeled bags rather than labeled instances, is a variant of supervised learning. This paper presents a novel MIL algorithm for the extreme learning machine, called MI-ELM. A radial-basis-kernel extreme learning machine is adapted to the MIL problem, using the Hausdorff distance to measure the distance between bags. The clusters in the hidden layer are composed of randomly generated bags. Because we do not need to tune the parameters of the hidden layer, MI-ELM can learn very fast. Experimental results on classification and multiple-instance regression data sets demonstrate that MI-ELM is useful and efficient compared with state-of-the-art algorithms.
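A minimal sketch of the bag-level hidden layer the abstract describes: randomly chosen bags act as RBF centers, the Hausdorff distance measures bag-to-bag distance, and the output weights are solved as in ELM. The particular functions and parameters below are assumptions for illustration, not the authors' MI-ELM code.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two bags of instances (rows of A and B)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def mi_elm_train(bags, T, n_centers=10, gamma=1.0, rng=np.random.default_rng(0)):
    """Hidden layer built from randomly chosen bags used as RBF centers."""
    centers = [bags[i] for i in rng.choice(len(bags), n_centers, replace=False)]
    H = np.array([[np.exp(-gamma * hausdorff(bag, c) ** 2) for c in centers]
                  for bag in bags])
    beta = np.linalg.pinv(H) @ T                                 # output weights as in ELM
    return centers, beta
```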


Author(s):  
Qingsong Xu

Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks. In theory, this algorithm is able to provide good generalization capability at extremely fast learning speed. Comparative studies of benchmark function approximation problems revealed that ELM can learn thousands of times faster than conventional neural network (NN) and can produce good generalization performance in most cases. Unfortunately, the research on damage localization using ELM is limited in the literature. In this chapter, the ELM is extended to the domain of damage localization of plate structures. Its effectiveness in comparison with typical neural networks such as back-propagation neural network (BPNN) and least squares support vector machine (LSSVM) is illustrated through experimental studies. Comparative investigations in terms of learning time and localization accuracy are carried out in detail. It is shown that ELM paves a new way in the domain of plate structure health monitoring. Both advantages and disadvantages of using ELM are discussed.

