Applying a new localized generalization error model to design neural networks trained with extreme learning machine

2014 ◽  
Vol 27 (1) ◽  
pp. 59-66 ◽  
Author(s):  
Qiang Liu ◽  
Jianping Yin ◽  
Victor C. M. Leung ◽  
Jun-Hai Zhai ◽  
Zhiping Cai ◽  
...  
Author(s):  
Shuxiang Xu

An Extreme Learning Machine (ELM) randomly chooses hidden neurons and analytically determines the output weights (Huang et al., 2005, 2006, 2008). With the ELM algorithm, only the connection weights between the hidden layer and the output layer are adjusted. The ELM algorithm tends to generalize well at a very fast learning speed: it can learn thousands of times faster than conventional popular learning algorithms (Huang et al., 2006). Artificial Neural Networks (ANNs) have been widely used as powerful information processing models in applications such as bankruptcy prediction, cost prediction, revenue forecasting, forecasting of share prices and exchange rates, document processing, and many more. Higher Order Neural Networks (HONNs) are ANNs in which the net input to a computational neuron is a weighted sum of products of its inputs. Real-life data are rarely perfect: they often contain erroneous, incomplete, or vague entries, so missing values are common in many information sources. Missing data is a common problem in statistical analysis (Little & Rubin, 1987). This chapter applies the Extreme Learning Machine (ELM) algorithm to HONN models in several significant business cases involving datasets with missing values. The experimental results demonstrate that HONN models trained with the ELM algorithm offer significant advantages over standard HONN models, such as faster training and improved generalization ability.
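The ELM training rule summarized in the abstract — hidden-layer weights drawn at random and never adjusted, output weights solved analytically by least squares — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the chapter's implementation: the function names, the `tanh` activation, and the use of the Moore-Penrose pseudoinverse are assumptions standing in for the specifics of Huang et al.'s formulation.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng=None):
    """Train a single-hidden-layer ELM.

    Hidden weights/biases are random and fixed; only the output
    weights are fitted, analytically, via the pseudoinverse.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Randomly chosen hidden-layer parameters (never adjusted)
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Output weights determined analytically: beta = pinv(H) @ y
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: fixed random hidden layer, learned linear readout."""
    return np.tanh(X @ W + b) @ beta
```

Because the only "training" step is a single linear solve, there is no iterative weight update at all, which is where the speed advantage over gradient-based algorithms comes from.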


2019 ◽  
Vol 361 ◽  
pp. 196-211 ◽  
Author(s):  
Carlos Perales-González ◽  
Mariano Carbonero-Ruz ◽  
David Becerra-Alonso ◽  
Javier Pérez-Rodríguez ◽  
Francisco Fernández-Navarro
