Correntropy induced loss based sparse robust graph regularized extreme learning machine for cancer classification

2020, Vol 21 (1)
Author(s): Liang-Rui Ren, Ying-Lian Gao, Jin-Xing Liu, Junliang Shang, Chun-Hou Zheng

Abstract
Background: As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity in various studies, and ELM-based methods have been proposed for many different fields. However, sensitivity to noise and outliers remains the main problem limiting ELM's performance.
Results: In this paper, an integrated method named correntropy induced loss based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The correntropy induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. By constraining the output weight matrix with the L2,1-norm, we tend to obtain a sparse output weight matrix and thus a simpler single hidden layer feedforward neural network model. Introducing graph regularization preserves the local structural information of the data and further improves classification performance. In addition, we design an iterative optimization method based on half-quadratic optimization to solve the non-convex CSRGELM problem.
Conclusions: Classification results on the benchmark datasets show that CSRGELM obtains better results than competing methods. More importantly, we also apply the new method to the classification of cancer samples and achieve good classification performance.
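
The abstract does not spell out the loss itself; as a point of reference, a minimal sketch of the commonly used correntropy induced loss (C-loss) and of the sample weights that arise from its half-quadratic reformulation might look like the following, with the kernel width sigma as an assumed tuning parameter.

```python
import numpy as np

def c_loss(residual, sigma=1.0):
    """Correntropy induced loss (C-loss) for a vector of residuals.

    Unlike the squared loss it saturates for large residuals, so outliers
    contribute only a bounded penalty. sigma is an assumed default; the
    paper would tune it.
    """
    return 1.0 - np.exp(-residual**2 / (2.0 * sigma**2))

def half_quadratic_weights(residual, sigma=1.0):
    """Auxiliary weights from the half-quadratic reformulation: a sample's
    weight shrinks as its residual grows, which is what down-weights noisy
    samples in the iterative solver."""
    return np.exp(-residual**2 / (2.0 * sigma**2))

# Example: the large residual gets a nearly saturated loss and a tiny weight.
res = np.array([0.1, 0.5, 5.0])
print(c_loss(res), half_quadratic_weights(res))
```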

Author(s): Khanittha Phumrattanaprapin, Punyaphol Horata

The deep learning approach provides high classification performance, especially on image classification problems. However, a shortcoming of traditional deep learning methods is the long training time. The hierarchical extreme learning machine (H-ELM) framework addresses this problem with a hierarchical learning architecture in the style of the multilayer perceptron. H-ELM is composed of two parts: the first performs unsupervised multilayer encoding, and the second performs supervised feature classification. H-ELM achieves higher accuracy than the traditional ELM, but there is still room to improve its classification performance. This paper therefore proposes the extending hierarchical extreme learning machine (EH-ELM), which extends the supervised portion of H-ELM from a single layer to multiple layers. To evaluate EH-ELM, various classification datasets were studied and compared against H-ELM, the multilayer ELM, and several state-of-the-art deep architecture methods. The experimental results show that EH-ELM improves the accuracy rates over most of the other methods.
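
For orientation, here is a hedged sketch of the two-part H-ELM pipeline the abstract describes: unsupervised ELM-autoencoder encoding layers followed by a supervised least-squares readout. It uses a plain ridge penalty rather than the sparse penalty of the original H-ELM, and EH-ELM's multiple supervised layers are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder_layer(X, n_hidden, reg=1e-3):
    """One unsupervised encoding layer in the H-ELM spirit: a random hidden
    mapping is fitted back onto the input (here with a ridge penalty instead
    of H-ELM's sparse penalty), and the learned weights are reused as the
    layer's encoder."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)  # H @ beta ~ X
    return np.tanh(X @ beta.T)  # encoded representation

def supervised_readout(H, T, reg=1e-3):
    """Supervised stage: ordinary regularized ELM least squares.
    EH-ELM stacks several such supervised layers; this sketch shows one."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

# Two unsupervised layers followed by the supervised readout (toy data).
X = rng.normal(size=(200, 16))
T = np.eye(3)[rng.integers(0, 3, 200)]
H1 = elm_autoencoder_layer(X, 64)
H2 = elm_autoencoder_layer(H1, 64)
beta = supervised_readout(H2, T)
```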


Symmetry, 2019, Vol 11 (10), pp. 1284
Author(s): Licheng Cui, Huawei Zhai, Hongfei Lin

An extreme learning machine (ELM) is an innovative algorithm for single hidden layer feed-forward neural networks; essentially, it only needs to find the optimal output weights that minimize the output error, via least squares regression from the hidden layer to the output layer. Focusing on the output weights, we introduce an orthogonal constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on column-by-column optimization, whose main characteristic is that optimizing the full output weight matrix is decomposed into optimizing its individual column vectors. The complex orthogonal Procrustes problem is thereby transformed into simple least squares regression with an orthogonal constraint, which preserves more information from the ELM feature space in the output subspace and gives NOELM stronger regression and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time, and accuracy.
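
As an illustration of the two ingredients named in the abstract, the sketch below shows the ordinary ridge-regularized ELM readout alongside the classical SVD-based solution of the orthogonal Procrustes problem; the column-by-column scheme that NOELM actually uses is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_readout(H, T, reg=1e-3):
    """Ordinary ELM: ridge-regularized least squares from hidden
    activations H to targets T (reg is an assumed regularization term)."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

def orthogonal_readout(H, T):
    """Orthogonal Procrustes recipe for H @ B ~ T with B^T B = I, via the
    SVD of H^T T. Exact in the balanced (square) case and the usual
    closed-form recipe otherwise; not NOELM's column-by-column scheme."""
    U, _, Vt = np.linalg.svd(H.T @ T, full_matrices=False)
    return U @ Vt

# Toy usage: 100 samples, 20 hidden units, 5 outputs.
X = rng.normal(size=(100, 8))
W, b = rng.normal(size=(8, 20)), rng.normal(size=20)
H = np.tanh(X @ W + b)                    # random hidden layer
T = np.eye(5)[rng.integers(0, 5, 100)]    # one-hot targets
beta = elm_readout(H, T)
B = orthogonal_readout(H, T)              # has orthonormal columns
```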


2021, Vol 7, pp. e411
Author(s): Osman Altay, Mustafa Ulas, Kursat Esat Alyamac

The extreme learning machine (ELM) algorithm is widely used in regression and classification problems due to advantages such as speed and a high performance rate. Different artificial-intelligence-based optimization methods and chaotic systems have been proposed to improve the ELM, but a generalized solution method with the desired success rate has not been obtained. In this study, a new method is proposed that develops the ELM algorithm for regression problems with discrete-time chaotic systems. The ELM algorithm is improved by testing five different chaotic maps (Chebyshev, iterative, logistic, piecewise, tent). The proposed discrete-time chaotic systems based ELM (DCS-ELM) algorithm was tested on steel fiber reinforced self-compacting concrete datasets and four different public datasets, and its performance was compared with the basic ELM algorithm, linear regression, support vector regression, the kernel ELM algorithm, and the weighted ELM algorithm. It was observed to give better performance than the other algorithms.
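
The abstract does not detail how the chaotic sequences enter the algorithm; a plausible sketch, assuming the maps replace the uniform random initialization of the ELM input weights, is shown below for the logistic map only.

```python
import numpy as np

def logistic_map(n, x0=0.7, r=4.0):
    """Generate n values from the logistic map x_{k+1} = r * x_k * (1 - x_k).
    With r = 4 the sequence is chaotic on (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_input_weights(n_features, n_hidden, x0=0.7):
    """Hypothetical use of the map: scale the chaotic sequence to [-1, 1]
    and reshape it into the ELM input-weight matrix. DCS-ELM tests five maps
    (Chebyshev, iterative, logistic, piecewise, tent); this sketch shows
    only the logistic one and may not match the paper's exact scheme."""
    seq = logistic_map(n_features * n_hidden, x0=x0)
    return (2.0 * seq - 1.0).reshape(n_features, n_hidden)

W = chaotic_input_weights(n_features=8, n_hidden=50)
```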


2021, Vol 2021, pp. 1-6
Author(s): Qiang Cai, Fenghai Li, Yifan Chen, Haisheng Li, Jian Cao, ...

Along with the strong representational power of the convolutional neural network (CNN), image classification tasks have achieved considerable progress. However, the majority of works focus on designing complicated and redundant architectures for extracting informative features to improve classification performance. In this study, we concentrate instead on rectifying the incomplete outputs of the CNN. Concretely, we propose an image classification method based on Label Rectification Learning (LRL) through the kernel extreme learning machine (KELM). It consists of two steps: (1) preclassification, extracting incomplete labels through a pretrained CNN, and (2) label rectification, rectifying the generated incomplete labels with the KELM to obtain the rectified labels. Experiments conducted on publicly available datasets demonstrate the effectiveness of our method. Notably, our method is extensible and can be easily integrated with off-the-shelf networks to improve performance.
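
A minimal sketch of the kernel ELM machinery behind step (2), assuming an RBF kernel and using the CNN's soft scores as the KELM inputs; the variable names and the exact way LRL feeds the incomplete labels into the KELM are assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=0.1):
    """RBF kernel between rows of A and B (gamma is an assumed width)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=10.0, gamma=0.1):
    """Kernel ELM: closed-form output weights alpha = (K + I/C)^{-1} T."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, alpha, X_new, gamma=0.1):
    """Prediction f(x) = k(x, X_train) @ alpha."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Hypothetical rectification step: feed the CNN's soft predictions
# (the "incomplete labels") into the KELM and take its output as rectified labels.
cnn_train_scores = rng.random((200, 10))   # stand-in for pretrained CNN outputs
train_onehot = np.eye(10)[rng.integers(0, 10, 200)]
alpha = kelm_fit(cnn_train_scores, train_onehot)
rectified = kelm_predict(cnn_train_scores, alpha, rng.random((5, 10)))
```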


2020, Vol 10 (21), pp. 7488
Author(s): Yutu Yang, Xiaolin Zhou, Ying Liu, Zhongkang Hu, Fenglong Ding

The deep learning feature extraction method and the extreme learning machine (ELM) classifier are combined to establish a deep extreme learning machine model for wood image defect detection. A convolutional neural network (CNN) alone tends to produce inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is used here to preprocess the wood images, which reduces the complexity and computation of the image processing. A CNN is then applied to carry out the deep feature learning on the wood images. The simple linear iterative clustering algorithm is used to improve the initial model, and the resulting image features are used as ELM classification inputs. The ELM has faster training speed and stronger generalization ability than other similar neural networks, but the random selection of input weights and thresholds degrades classification accuracy. A genetic algorithm is therefore used to optimize the initial parameters of the ELM and stabilize the network's classification performance. The resulting deep extreme learning machine can extract high-level abstract information from the data, does not require iterative adjustment of the network weights, and has high computational efficiency, while the CNN effectively extracts the wood defect contours. The distributed input data features are automatically expressed in layered form by deep learning pre-training. Wood defect recognition accuracy reached 96.72% with a test time of only 187 ms.
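
As a rough illustration of the genetic-algorithm step, the sketch below searches over the ELM's input weights and biases with validation accuracy as the fitness; the encoding, crossover, and mutation used in the paper are not given in the abstract, so this toy GA (truncation selection plus Gaussian mutation) is purely an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit_predict(W, b, X_tr, T_tr, X_va, reg=1e-3):
    """Fit the ELM readout for fixed input weights W and biases b,
    then return hard class predictions on the validation set."""
    H_tr = np.tanh(X_tr @ W + b)
    beta = np.linalg.solve(H_tr.T @ H_tr + reg * np.eye(W.shape[1]), H_tr.T @ T_tr)
    return np.argmax(np.tanh(X_va @ W + b) @ beta, axis=1)

def ga_optimize_elm(X_tr, T_tr, X_va, y_va, n_hidden=30, pop=20, gens=15, sigma=0.1):
    """Toy GA over the flattened (W, b) genome: keep the fitter half,
    add Gaussian-mutated children, track the best genome seen so far."""
    d = X_tr.shape[1]
    genomes = rng.normal(size=(pop, d * n_hidden + n_hidden))
    best_g, best_fit = genomes[0], -1.0
    for _ in range(gens):
        fitness = np.array([
            np.mean(elm_fit_predict(g[:d * n_hidden].reshape(d, n_hidden),
                                    g[d * n_hidden:], X_tr, T_tr, X_va) == y_va)
            for g in genomes])
        if fitness.max() > best_fit:
            best_fit = fitness.max()
            best_g = genomes[fitness.argmax()].copy()
        parents = genomes[np.argsort(fitness)[::-1][:pop // 2]]
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        genomes = np.vstack([parents, children])
    W = best_g[:d * n_hidden].reshape(d, n_hidden)
    return W, best_g[d * n_hidden:], best_fit
```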


Author(s): Xueping Liu, Xingzuo Yue

The kernel function has been successfully utilized in the extreme learning machine (ELM), where it provides stable, generalized performance and greatly reduces the computational complexity. However, selecting and optimizing the parameters of the most common kernel functions is tedious and time-consuming. In this study, a set of new Hermite kernel functions derived from the generalized Hermite polynomials is proposed. A significant feature of the proposed kernel is that it has only one parameter, selected from a small set of natural numbers; thus parameter optimization is greatly facilitated, and ample structural information of the sample data is retained. Consequently, the new kernel functions can serve as optimal alternatives to other common kernel functions for the ELM at a rapid learning speed. The experimental results showed that the proposed kernel ELM method tends to have similar or better robustness and generalization performance at a faster learning speed than other common kernel ELM and support vector machine methods. When applied to human action recognition from depth video sequences, the method also achieves excellent performance, demonstrating its speed advantage on video image data.
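
The closed form of the proposed kernel is not given in the abstract; the sketch below illustrates one way to construct a positive semidefinite kernel from Hermite polynomial features with a single natural-number parameter n, which can then be dropped into the usual kernel ELM solution. It is an illustration, not the paper's exact kernel.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_features(X, n=3):
    """Feature map whose components are H_0(x)..H_n(x) evaluated at every
    coordinate of X (physicists' Hermite polynomials). The degree n stands
    in for the single natural-number parameter mentioned in the abstract."""
    feats = [hermval(X, [0.0] * i + [1.0]) for i in range(n + 1)]
    return np.concatenate(feats, axis=1)

def hermite_kernel(A, B, n=3):
    """k(x, z) = <phi(x), phi(z)> with the Hermite feature map above,
    hence positive semidefinite by construction."""
    return hermite_features(A, n) @ hermite_features(B, n).T

# Plugging into a kernel ELM readout: alpha = (K + I/C)^{-1} T
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
T = np.eye(2)[rng.integers(0, 2, 50)]
K = hermite_kernel(X, X, n=3)
alpha = np.linalg.solve(K + np.eye(50) / 10.0, T)
```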


2018, Vol 22 (11), pp. 3487-3494
Author(s): Weipeng Cao, Jinzhu Gao, Zhong Ming, Shubin Cai, Zhiguang Shan
