A GA-Based Multi-View, Multi-Learner Active Learning Framework for Hyperspectral Image Classification

2020 ◽  
Vol 12 (2) ◽  
pp. 297 ◽  
Author(s):  
Nasehe Jamshidpour ◽  
Abdolreza Safari ◽  
Saeid Homayouni

This paper introduces a novel multi-view multi-learner (MVML) active learning (AL) method, in which the different views are generated by a genetic algorithm (GA). The GA-based view generation method attempts to construct diverse, sufficient, and independent views by considering both inter- and intra-view confidences. Hyperspectral data are inherently high dimensional, which makes them well suited to multi-view learning algorithms. Furthermore, by employing multiple learners for each view, a more accurate estimate of the underlying data distribution can be obtained. We also implemented a spectral-spatial graph-based semi-supervised learning (SSL) method as the classifier, which improved classification performance compared with supervised learning. The proposed method was evaluated on three benchmark hyperspectral data sets, and the results were compared with other state-of-the-art AL-SSL methods. The experimental results demonstrated the efficiency and statistically significant superiority of the proposed method. The GA-MVML AL method improved classification performance by 16.68%, 18.37%, and 15.1% on the three data sets after 40 iterations.
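
The GA-based view generation is the core of this framework. As a rough illustration only, the following minimal sketch evolves a band-to-view assignment with a basic genetic algorithm; it assumes bands are partitioned into disjoint views and uses a simple redundancy proxy (correlation between view-mean spectra) as fitness, rather than the paper's inter- and intra-view confidences.

```python
# Hedged sketch of GA-based view (band subset) generation, NOT the authors' exact method.
import numpy as np

rng = np.random.default_rng(0)

def view_fitness(chromosome, X, n_views):
    """Higher is better: views whose mean spectra are weakly correlated."""
    means = []
    for v in range(n_views):
        bands = np.where(chromosome == v)[0]
        if bands.size == 0:                     # an empty view is invalid
            return -np.inf
        means.append(X[:, bands].mean(axis=1))
    corr = np.corrcoef(np.stack(means))
    off_diag = corr[~np.eye(n_views, dtype=bool)]
    return -np.abs(off_diag).mean()             # penalize redundant views

def ga_views(X, n_views=3, pop_size=30, n_gen=50, p_mut=0.05):
    n_bands = X.shape[1]
    pop = rng.integers(0, n_views, size=(pop_size, n_bands))
    for _ in range(n_gen):
        scores = np.array([view_fitness(ind, X, n_views) for ind in pop])
        # tournament selection
        a = rng.integers(0, pop_size, pop_size)
        b = rng.integers(0, pop_size, pop_size)
        parents = pop[np.where(scores[a] >= scores[b], a, b)]
        # single-point crossover on consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_bands)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # mutation: reassign random bands to random views
        mask = rng.random(children.shape) < p_mut
        children[mask] = rng.integers(0, n_views, mask.sum())
        children[0] = pop[scores.argmax()]      # elitism
        pop = children
    final = np.array([view_fitness(ind, X, n_views) for ind in pop])
    best = pop[final.argmax()]
    return [np.where(best == v)[0] for v in range(n_views)]

# toy usage: 500 pixels with 103 spectral bands
X = rng.random((500, 103))
print([v.size for v in ga_views(X)])
```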

Author(s):  
Zhijing Ye ◽  
Hong Li ◽  
Yalong Song ◽  
Jianzhong Wang ◽  
Jon Atli Benediktsson

In this paper, we propose a novel semi-supervised learning classification framework for hyperspectral images using box-based smooth ordering and multiple 1D-embedding-based interpolation (M1DEI) [J. Wang, Semi-supervised learning using multiple one-dimensional embedding-based adaptive interpolation, Int. J. Wavelets Multiresolut. Inf. Process. 14(2) (2016) 11 pp.]. Due to the lack of labeled samples, conventional supervised approaches generally cannot perform well enough. Obtaining labeled samples for hyperspectral image classification is difficult, expensive, and time-consuming, whereas unlabeled samples are readily available. The proposed method effectively overcomes the shortage of labeled samples by promoting unlabeled samples to labeled ones within a label-boosting framework. Furthermore, it uses spatial information from the pixels in the neighborhood of the current pixel to better capture the features of the hyperspectral image. The idea is to first extract the box (data cube) around each pixel from its neighborhood and then apply multiple 1D interpolation to construct the classifier. Experimental results on three hyperspectral data sets demonstrate that the proposed method is efficient and outperforms recent popular semi-supervised methods in terms of accuracy.
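
The box-extraction step described above is straightforward to illustrate. The sketch below (a simplification: it only extracts and flattens the neighborhood cube around each pixel, with mirror padding at the borders; the 1D-embedding classifier itself is not shown) gives the flavor of how the spatial neighborhood is turned into a feature vector.

```python
# Minimal sketch of the box (neighborhood cube) extraction step, under assumed padding.
import numpy as np

def extract_boxes(cube, half=2):
    """cube: (H, W, B) hyperspectral image; returns (H*W, (2*half+1)**2 * B) features."""
    H, W, B = cube.shape
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    feats = np.empty((H * W, (2 * half + 1) ** 2 * B), dtype=cube.dtype)
    k = 0
    for i in range(H):
        for j in range(W):
            box = padded[i:i + 2 * half + 1, j:j + 2 * half + 1, :]
            feats[k] = box.ravel()
            k += 1
    return feats

# toy usage on a 10x12 image with 8 bands
img = np.random.rand(10, 12, 8)
print(extract_boxes(img, half=2).shape)   # (120, 200)
```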


2019 ◽  
Vol 85 (11) ◽  
pp. 841-851
Author(s):  
Ying Cui ◽  
Xiaowei Ji ◽  
Kai Xu ◽  
Liguo Wang

Exploiting limited labeled samples to improve classification results is a challenge in hyperspectral image analysis. Active Learning (AL) and Semisupervised Learning (SSL) are two promising techniques for addressing this challenge, and combining AL with SSL is an attractive strategy for hyperspectral image classification. Traditional methods, such as the Collaborative Active and Semisupervised Learning algorithm (CASSL), may introduce many incorrect pseudolabels and show premature convergence. To overcome these drawbacks, a novel framework named Double-Strategy-Check Collaborative Active and Semisupervised Learning (DSC-CASSL) is proposed in this paper. This framework combines two different AL algorithms and SSL in a collaborative mode. The double-strategy verification gradually improves the pseudolabeling accuracy and thereby facilitates SSL. We evaluate the performance of DSC-CASSL on four hyperspectral data sets and compare it with that of four hyperspectral image classification methods. Our results suggest that DSC-CASSL leads to consistent improvements in hyperspectral image classification.
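
To make the double-check idea concrete, here is a hedged sketch (the thresholds, the agreement rule, and the CASSL bookkeeping are assumptions, not the paper's exact procedure): an unlabeled sample is pseudolabeled only when two independent learners agree and are both confident, while ambiguous samples are routed to the active-learning query.

```python
# Hedged sketch of a double-check pseudolabeling rule with assumed confidence thresholds.
import numpy as np

def double_check(proba_a, proba_b, conf_thresh=0.9):
    pred_a, pred_b = proba_a.argmax(1), proba_b.argmax(1)
    conf_a, conf_b = proba_a.max(1), proba_b.max(1)
    agree = pred_a == pred_b
    confident = (conf_a >= conf_thresh) & (conf_b >= conf_thresh)
    pseudo_idx = np.where(agree & confident)[0]     # accept with pseudolabels
    query_idx = np.where(~agree | ~confident)[0]    # candidates for an oracle query
    return pseudo_idx, pred_a[pseudo_idx], query_idx

# toy usage with random class-probability outputs from two learners
rng = np.random.default_rng(1)
pa = rng.dirichlet(np.ones(5), size=100)
pb = rng.dirichlet(np.ones(5), size=100)
idx, labels, query = double_check(pa, pb)
print(len(idx), len(query))
```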


PLoS ONE ◽  
2018 ◽  
Vol 13 (1) ◽  
pp. e0188996 ◽  
Author(s):  
Muhammad Ahmad ◽  
Stanislav Protasov ◽  
Adil Mehmood Khan ◽  
Rasheed Hussain ◽  
Asad Masood Khattak ◽  
...  

2021 ◽  
Vol 87 (6) ◽  
pp. 445-455
Author(s):  
Yi Ma ◽  
Zezhong Zheng ◽  
Yutang Ma ◽  
Mingcang Zhu ◽  
Ran Huang ◽  
...  

Many manifold learning algorithms conduct an eigenvector analysis on a data-similarity matrix of size N×N, where N is the number of data points, so the memory complexity of the analysis is at least O(N²). In this article we present an incremental manifold learning approach for handling large hyperspectral data sets for land use identification. In our method, the intrinsic dimensionality of the high-dimensional hyperspectral image data is estimated from the training data set. A local-curvature-variation algorithm is used to sample a subset of data points as landmarks, and a manifold skeleton is then identified from these landmarks. Our method is validated on three AVIRIS hyperspectral data sets, outperforming the comparison algorithms with a k-nearest-neighbor classifier and achieving the second-best performance with a support vector machine.
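
A landmark-based embedding avoids ever forming the full N×N similarity matrix, which is the memory point made above. The following minimal sketch uses uniform random landmark sampling and a Nystrom-style extension as a stand-in; the paper instead selects landmarks with a local-curvature-variation criterion, so treat this purely as an illustration of the memory saving.

```python
# Hedged sketch of a landmark (Nystrom-style) embedding: O(N*m) memory instead of O(N^2).
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def landmark_embedding(X, n_landmarks=50, n_dims=3, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lm = X[rng.choice(len(X), n_landmarks, replace=False)]   # landmark subset
    W = rbf(lm, lm, gamma)                  # m x m "skeleton" similarity matrix
    C = rbf(X, lm, gamma)                   # N x m similarities to landmarks only
    vals, vecs = np.linalg.eigh(W)
    top = np.argsort(vals)[::-1][:n_dims]   # leading eigenpairs of the skeleton
    # Nystrom extension: project every point through the landmark eigenbasis
    return C @ vecs[:, top] / np.sqrt(vals[top])

X = np.random.rand(2000, 100)               # 2000 pixels, 100 bands
print(landmark_embedding(X).shape)          # (2000, 3)
```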


2019 ◽  
Vol 11 (9) ◽  
pp. 1114
Author(s):  
Sixiu Hu ◽  
Jiangtao Peng ◽  
Yingxiong Fu ◽  
Luoqing Li

By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively capture the intrinsic nonlinear relations of hyperspectral data and better exploit the spatial neighborhood structure to improve the classification performance of hyperspectral images. However, the performance of KJSR is greatly affected by noisy or inhomogeneous pixels around the central testing pixel in the spatial domain. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model that adaptively learns weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. The SPL strategy learns a weight that indicates the difficulty of each pixel within a spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighbors can be suppressed, making SPKJSR considerably more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models.
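
The standard hard self-paced weighting rule conveys the intuition: neighbors whose current representation residual exceeds the age parameter lambda receive zero weight, so difficult (noisy or inhomogeneous) pixels are ignored until lambda grows. The sketch below uses toy residuals; in SPKJSR the residuals would come from the kernel sparse-representation solve, and the paper's exact weighting scheme may differ.

```python
# Hedged sketch of the classic hard self-paced weighting rule with toy residuals.
import numpy as np

def self_paced_weights(residuals, lam):
    """Hard SPL rule: w_i = 1 if residual_i < lam else 0."""
    return (residuals < lam).astype(float)

residuals = np.array([0.12, 0.08, 0.95, 0.15, 0.70])   # toy per-neighbor residuals
for lam in (0.2, 0.5, 1.0):                             # lambda grows: easy -> hard
    print(lam, self_paced_weights(residuals, lam))
```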


2021 ◽  
Vol 13 (17) ◽  
pp. 3411
Author(s):  
Lanxue Dang ◽  
Peidong Pang ◽  
Xianyu Zuo ◽  
Yang Liu ◽  
Jay Lee

Convolutional neural networks (CNNs) have shown excellent performance in hyperspectral image (HSI) classification. However, CNN models are structurally complex, requiring many training parameters and floating-point operations (FLOPs), which is often inefficient and leads to longer training and testing times. In addition, labeled samples of hyperspectral data are limited, and deep networks are prone to over-fitting. Hence, a dual-path small convolution (DPSC) module is proposed. It is composed of two 1 × 1 small convolutions with a residual path and a density path, and can effectively extract abstract features from HSI. A dual-path small convolution network (DPSCN) is constructed by stacking DPSC modules. Specifically, the proposed model uses DPSC modules to extract spectral and spectral–spatial features successively, and then uses a global average pooling layer at the end of the model, in place of the conventional fully connected layer, to complete the final classification. In the implemented study, all convolutional layers of the proposed network, except the middle layer, use 1 × 1 small convolutions, which effectively reduces the number of model parameters and speeds up feature extraction. DPSCN was compared with several current state-of-the-art models. The results on three benchmark HSI data sets demonstrate that the proposed model has lower complexity, stronger generalization ability, and higher classification efficiency.
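
One plausible reading of the DPSC module (two 1 × 1 convolutions, one on an additive residual path and one on a dense, concatenation path) is sketched below in PyTorch. The channel sizes, normalization, and ordering are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged PyTorch sketch of a dual-path 1x1-convolution block; details are assumed.
import torch
import torch.nn as nn

class DPSC(nn.Module):
    def __init__(self, in_ch, growth=32):
        super().__init__()
        self.res_conv = nn.Conv2d(in_ch, in_ch, kernel_size=1)     # residual path
        self.dense_conv = nn.Conv2d(in_ch, growth, kernel_size=1)  # dense path
        self.bn = nn.BatchNorm2d(in_ch + growth)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        res = x + self.res_conv(x)                  # additive shortcut
        dense = self.dense_conv(x)                  # new features to concatenate
        return self.act(self.bn(torch.cat([res, dense], dim=1)))

# toy usage: batch of 4 patches, 30 spectral channels, 9x9 spatial window
x = torch.randn(4, 30, 9, 9)
block = DPSC(30)
print(block(x).shape)   # torch.Size([4, 62, 9, 9])
```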


2019 ◽  
Vol 11 (24) ◽  
pp. 2897 ◽  
Author(s):  
Yuhui Zheng ◽  
Feiyang Wu ◽  
Hiuk Jae Shim ◽  
Le Sun

Hyperspectral unmixing is a key preprocessing technique for hyperspectral image analysis. To further improve unmixing performance, this paper integrates a nonlocal low-rank prior with spatial smoothness and spectral collaborative sparsity for unmixing hyperspectral data. The proposed method is based on the fact that hyperspectral images exhibit nonlocal self-similarity and local smoothness. To exploit the spatial self-similarity, nonlocal cubic patches are grouped together to form a low-rank matrix. Then, within the linear mixing model framework, a nuclear-norm constraint is imposed on the abundance matrix of these similar patches to enforce the low-rank property. In addition, local spatial information and spectral characteristics are taken into account by introducing total variation (TV) regularization and a collaborative sparsity term, respectively. Finally, experiments on two simulated data sets and two real data sets show that the proposed algorithm outperforms other state-of-the-art algorithms.
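
A hedged sketch of how such an objective is typically assembled under the linear mixing model (the exact operators, weights, and constraints used by the authors are not given here and are assumptions) is:

```latex
\min_{A \ge 0}\;
  \tfrac{1}{2}\,\lVert Y - EA \rVert_F^2
  \;+\; \lambda_{\mathrm{nl}} \sum_{k} \lVert \mathcal{P}_k(A) \rVert_*
  \;+\; \lambda_{\mathrm{tv}}\, \mathrm{TV}(A)
  \;+\; \lambda_{\mathrm{c}}\, \lVert A \rVert_{2,1}
```

where Y is the observed data, E the endmember matrix, A the abundance matrix, P_k(A) stacks the abundances of the k-th group of nonlocal similar patches (the nuclear norm enforces low rank), TV(A) encodes local spatial smoothness, and the l_{2,1} norm enforces collaborative (row) sparsity.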


2020 ◽  
Vol 12 (12) ◽  
pp. 2016 ◽  
Author(s):  
Tao Zhang ◽  
Puzhao Zhang ◽  
Weilin Zhong ◽  
Zhen Yang ◽  
Fan Yang

The traditional local binary pattern (LBP, hereinafter also called the two-dimensional local binary pattern, 2D-LBP) is unable to depict the spectral characteristics of a hyperspectral image (HSI). To remedy this deficiency, this paper develops a joint spectral-spatial 2D-LBP feature (J2D-LBP) by averaging three different 2D-LBP features in a three-dimensional hyperspectral data cube. J2D-LBP is then added to the Gabor filter-based deep network (GFDN), yielding a novel classification method, JL-GFDN. Different from the original GFDN framework, JL-GFDN further fuses the spectral and spatial features for HSI classification. Three real data sets are used to evaluate the effectiveness of JL-GFDN, and the experimental results verify that (i) JL-GFDN achieves better classification accuracy than the original GFDN and (ii) J2D-LBP is more effective for HSI classification than the traditional 2D-LBP.
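
As a rough illustration of averaging 2D-LBP features over a hyperspectral cube, the sketch below computes uniform-LBP histograms on three slices through a local cube and averages them. The choice of slices and histogram settings are assumptions for illustration; they are not the authors' exact J2D-LBP construction.

```python
# Hedged sketch: average 2D-LBP histograms from three planes of a hyperspectral patch.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(img2d, P=8, R=1):
    codes = local_binary_pattern(img2d, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def j2d_lbp(cube):
    """cube: (rows, cols, bands) local hyperspectral patch."""
    xy = cube.mean(axis=2)              # spatial plane (band-averaged)
    xz = cube.mean(axis=0)              # row-averaged spectral-spatial plane
    yz = cube.mean(axis=1)              # column-averaged spectral-spatial plane
    return np.mean([lbp_hist(p) for p in (xy, xz, yz)], axis=0)

patch = np.random.rand(21, 21, 100)     # toy 21x21 patch with 100 bands
print(j2d_lbp(patch).shape)             # (10,)
```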

