Convolutional Extreme Learning Machines: A Systematic Review


Informatics ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. 33
Author(s):  
Iago Richard Rodrigues ◽  
Sebastião Rogério da Silva Neto ◽  
Judith Kelner ◽  
Djamel Sadok ◽  
Patricia Takako Endo

Much work has recently identified the need to combine deep learning with extreme learning in order to strike a balance between computational performance and accuracy, especially in the domain of multimedia applications. Considering this new paradigm, namely the convolutional extreme learning machine (CELM), we present a systematic review that investigates alternative deep learning architectures that use the extreme learning machine (ELM) for faster training to solve problems based on image analysis. We detail each of the architectures found in the literature along with their application scenarios, benchmark datasets, main results, and advantages, and then present the open challenges for CELM. We followed a well-structured methodology and established relevant research questions that guided our findings. Based on 81 primary studies, we found that object recognition is the most common problem solved by CELM, and that a CNN with predefined kernels is the most common CELM architecture proposed in the literature. The results from experiments show that CELM models present good precision, convergence, and computational performance, and that they are able to decrease the total processing time required by the learning process. The results presented in this systematic review are expected to contribute to the CELM research area, providing a good starting point for dealing with some of the current problems in image-based computer vision analysis.
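
The common core of the CELM architectures surveyed above is the ELM training step itself: the hidden-layer parameters are left random (or taken from convolutional feature maps), and only the output weights are solved in closed form. The snippet below is a minimal NumPy sketch of that idea, assuming a generic feature matrix X and one-hot targets T; in a CELM, X would typically be the flattened output of convolutional and pooling stages rather than raw pixels. The function names and the regularization constant C are illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

def elm_train(X, T, n_hidden=512, C=1e3, rng=np.random.default_rng(0)):
    """Train a basic ELM: random hidden layer + closed-form output weights.

    X: (n_samples, n_features) inputs (e.g., flattened CNN feature maps in a CELM).
    T: (n_samples, n_classes) one-hot targets.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    # Regularized least squares: beta = (H^T H + I/C)^-1 H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta                  # class scores; argmax gives labels
```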


2020 ◽  
Vol 13 (4) ◽  
pp. 1237-1250
Author(s):  
Deepak Kumar ◽  
Thendiyath Roshni ◽  
Anshuman Singh ◽  
Madan Kumar Jha ◽  
Pijush Samui

2020 ◽  
Vol 10 (21) ◽  
pp. 7488
Author(s):  
Yutu Yang ◽  
Xiaolin Zhou ◽  
Ying Liu ◽  
Zhongkang Hu ◽  
Fenglong Ding

A deep learning feature extraction method and an extreme learning machine (ELM) classifier are combined to build a deep extreme learning machine model for wood image defect detection. A convolutional neural network (CNN) alone tends to provide inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is used here to preprocess the wood images, which reduces the complexity and computational cost of the image processing. A CNN is then applied to perform the deep feature learning on the wood images. The simple linear iterative clustering (SLIC) algorithm is used to improve the initial model, and the resulting image features are used as inputs to the ELM classifier. The ELM trains faster and generalizes better than comparable neural networks, but its random selection of input weights and thresholds degrades classification accuracy. A genetic algorithm is therefore used to optimize the initial parameters of the ELM and stabilize its classification performance. The deep extreme learning machine can extract high-level abstract information from the data, does not require iterative adjustment of the network weights, is computationally efficient, and allows the CNN to extract wood defect contours effectively. Distributed input features are automatically expressed layer by layer through deep learning pre-training. The wood defect recognition accuracy reached 96.72% with a test time of only 187 ms.
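
As a rough illustration of the genetic-algorithm step described above, the sketch below evolves candidate ELM input weights and biases and keeps the individual with the best validation accuracy, solving the output weights in closed form for each candidate. The population size, mutation scale, selection scheme, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_val_accuracy(W, b, X_tr, T_tr, X_val, y_val, C=1e3):
    """Solve the ELM output weights in closed form and return validation accuracy."""
    H = np.tanh(X_tr @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ T_tr)
    pred = np.argmax(np.tanh(X_val @ W + b) @ beta, axis=1)
    return float(np.mean(pred == y_val))

def ga_optimize_elm(X_tr, T_tr, X_val, y_val, n_hidden=128,
                    pop_size=20, generations=30, sigma=0.1):
    """Toy GA over the ELM's random input weights and biases."""
    d = X_tr.shape[1]
    pop = [(rng.standard_normal((d, n_hidden)), rng.standard_normal(n_hidden))
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [elm_val_accuracy(W, b, X_tr, T_tr, X_val, y_val) for W, b in pop]
        order = np.argsort(scores)[::-1]
        elite = [pop[i] for i in order[:pop_size // 2]]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            i, j = rng.choice(len(elite), size=2, replace=False)
            # Uniform crossover followed by Gaussian mutation.
            Wc = np.where(rng.random((d, n_hidden)) < 0.5, elite[i][0], elite[j][0])
            bc = np.where(rng.random(n_hidden) < 0.5, elite[i][1], elite[j][1])
            children.append((Wc + sigma * rng.standard_normal((d, n_hidden)),
                             bc + sigma * rng.standard_normal(n_hidden)))
        pop = elite + children
    scores = [elm_val_accuracy(W, b, X_tr, T_tr, X_val, y_val) for W, b in pop]
    return pop[int(np.argmax(scores))]                           # best (W, b) found
```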


2014 ◽  
Vol 548-549 ◽  
pp. 1735-1738 ◽  
Author(s):  
Jian Tang ◽  
Dong Yan ◽  
Li Jie Zhao

Modeling concrete compressive strength is useful for ensuring quality in civil engineering. This paper compares several extreme learning machine (ELM)-based modeling approaches for predicting concrete compressive strength. The standard ELM algorithm, the partial least squares-based ELM (PLS-ELM) algorithm, and the kernel ELM (KELM) algorithm are applied and evaluated. Results indicate that the standard ELM algorithm has the highest modeling speed, while KELM has the best prediction accuracy. Each method is validated for modeling concrete compressive strength; the appropriate modeling approach should be selected according to the purpose at hand.
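
For reference, kernel ELM replaces the explicit random hidden layer with a kernel matrix while keeping a closed-form solution. The sketch below is a minimal RBF-kernel regression version (e.g., predicting compressive strength from mix features); the kernel width gamma and regularization constant C are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kelm_fit(X, y, C=100.0, gamma=0.1):
    """Closed-form KELM: alpha = (K + I/C)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_new, X_train, alpha, gamma=0.1):
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```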


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Jie Lai ◽  
Xiaodan Wang ◽  
Rui Li ◽  
Yafei Song ◽  
Lei Lei

In order to prevent overfitting and improve the generalization performance of the Extreme Learning Machine (ELM), this paper proposes a new regularization method, Biased DropConnect, and a new regularized ELM (BD-ELM) that uses both Biased DropConnect and Biased Dropout. Like Biased Dropout applied to hidden nodes, Biased DropConnect exploits differences among connection weights to retain more of the network's information after dropping. Regular Dropout and DropConnect set the connection weights and the hidden-layer outputs to 0 with a single fixed probability, whereas Biased DropConnect and Biased Dropout divide the connection weights and hidden nodes into high and low groups by a threshold and set each group to 0 with a different probability. Connection weights with high values and hidden nodes with high activation values, which contribute more to network performance, are kept with a lower drop probability, while low-value weights and nodes are given a higher drop probability, so that the overall drop probability of the network remains a fixed constant. With Biased DropConnect and Biased Dropout regularization, BD-ELM enhances parameter sparsity and reduces structural complexity. Experiments on various benchmark datasets show that Biased DropConnect and Biased Dropout can effectively address overfitting, and that BD-ELM provides higher classification accuracy than ELM, R-ELM, and Drop-ELM.
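
A minimal sketch of the biased-drop idea described above, applied to a single weight matrix W: weights are split into high and low groups by a magnitude threshold (the median here, as an illustrative choice) and each group is zeroed with its own probability, chosen so the overall drop rate matches a target p. The threshold rule and parameter values are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def biased_dropconnect_mask(W, p=0.5, p_high=0.3, rng=np.random.default_rng(0)):
    """Drop high-magnitude weights with probability p_high and low-magnitude
    weights with probability p_low, chosen so the average drop rate stays p.

    Splitting at the median puts roughly half the weights in each group, so
    p = 0.5 * p_high + 0.5 * p_low  =>  p_low = 2 * p - p_high.
    """
    p_low = 2 * p - p_high
    assert 0.0 <= p_low <= 1.0, "choose p_high so that the implied p_low is a probability"
    threshold = np.median(np.abs(W))
    high = np.abs(W) >= threshold                     # high-value group
    drop_prob = np.where(high, p_high, p_low)         # per-weight drop probability
    mask = rng.random(W.shape) >= drop_prob           # True = keep the connection
    return W * mask
```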


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Xinran Zhou ◽  
Zijian Liu ◽  
Congxu Zhu

To apply single hidden-layer feedforward neural networks (SLFNs) to the identification of time-varying systems, this paper presents an online regularized extreme learning machine with a forgetting mechanism (FORELM) and an online kernelized ELM with a forgetting mechanism (FOKELM). FORELM updates the output weights of the SLFN recursively using the Sherman-Morrison formula and combines the advantages of the online sequential ELM with forgetting mechanism (FOS-ELM) and the regularized online sequential ELM (ReOS-ELM): it captures the latest properties of the identified system by learning from a fixed number of the newest samples, while regularization avoids ill-conditioned matrix inversion. FOKELM tackles the matrix-expansion problem of the kernel-based incremental ELM (KB-IELM) by deleting the oldest sample, using the block matrix inverse formula, as new samples arrive continually. The experimental results show that the proposed FORELM and FOKELM are more stable than FOS-ELM and more accurate than ReOS-ELM in nonstationary environments; moreover, under certain conditions FORELM and FOKELM are more time-efficient than the dynamic regression extreme learning machine (DR-ELM).
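
To make the recursive update concrete, the sketch below shows a generic sliding-window, rank-one update and downdate of an ELM output-weight solution using the Sherman-Morrison formula. It is a standard recursive-least-squares formulation under the assumption that P holds the current inverse of the regularized Gram matrix; it is not the paper's exact FORELM derivation, and the variable names are assumptions.

```python
import numpy as np

def rls_add(P, beta, h, t):
    """Fold one new sample into the ELM solution via the Sherman-Morrison formula.

    P    : current inverse (H^T H + I/C)^-1, shape (L, L)
    beta : current output weights, shape (L, n_outputs)
    h    : hidden-layer activations of the new sample, shape (L,)
    t    : its target vector, shape (n_outputs,)
    """
    Ph = P @ h
    P = P - np.outer(Ph, Ph) / (1.0 + h @ Ph)
    beta = beta + np.outer(P @ h, t - h @ beta)
    return P, beta

def rls_remove(P, beta, h_old, t_old):
    """Discard the oldest sample (the 'forgetting' step of a sliding window)."""
    Ph = P @ h_old
    P = P + np.outer(Ph, Ph) / (1.0 - h_old @ Ph)
    beta = beta - np.outer(P @ h_old, t_old - h_old @ beta)
    return P, beta
```

In use, each arriving input x would be mapped to its hidden activation h = tanh(W x + b) and passed to rls_add; once the window holds the desired number of samples, rls_remove is called with the oldest sample's activation and target before the next addition.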


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Shan Pang ◽  
Xinyi Yang

In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, the deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which yields better generalization with faster learning. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce feature dimensionality, saving considerable training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time compared with deep learning methods and other ELM methods.
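
The sketch below illustrates the front end of such an architecture in a deliberately simplified form: a single stage of random (untrained) convolution filters, a ReLU, and stochastic pooling, producing feature vectors that an ELM classifier would consume (solved in closed form, as in the first sketch on this page). DC-ELM itself stacks multiple alternating convolution and pooling layers and applies stochastic pooling only in the last hidden layer; the filter count, kernel size, and pooling window here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' cross-correlation of a 2-D image with one kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def stochastic_pool(fmap, size=2):
    """Stochastic pooling: in each window, sample one activation with
    probability proportional to its value (activations assumed non-negative)."""
    H2, W2 = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.empty((H2, W2))
    for i in range(H2):
        for j in range(W2):
            win = fmap[i * size:(i + 1) * size, j * size:(j + 1) * size].ravel()
            total = win.sum()
            p = win / total if total > 0 else np.full(win.size, 1.0 / win.size)
            out[i, j] = rng.choice(win, p=p)
    return out

def celm_features(images, n_filters=8, ksize=5):
    """Random convolution filters + ReLU + stochastic pooling, as a CELM front end."""
    filters = rng.standard_normal((n_filters, ksize, ksize))
    feats = []
    for img in images:                                  # e.g., 28x28 MNIST digits
        maps = [stochastic_pool(np.maximum(conv2d_valid(img, f), 0.0)) for f in filters]
        feats.append(np.concatenate([m.ravel() for m in maps]))
    return np.array(feats)
```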

