Iterative Random Training Sample Selection for Hyperspectral Image Classification

Author(s):  
Chia-Chen Liang ◽  
Yi-Mei Kuo ◽  
Kenneth Yeonkong Ma ◽  
Peter F. Hu ◽  
Chein-I Chang
2019 ◽  
Vol 11 (17) ◽  
pp. 2057 ◽  
Author(s):  
Majid Shadman Roodposhti ◽  
Arko Lucieer ◽  
Asim Anees ◽  
Brett Bryan

This paper assesses the performance of DoTRules, a dictionary of trusted rules, as a supervised rule-based ensemble framework built on mean-shift segmentation for hyperspectral image classification. The proposed ensemble consists of multiple rule sets whose rules are constructed from different class frequencies and sequences of occurrence. Shannon entropy is computed for every rule to assess its uncertainty and to filter out unreliable rules. DoTRules is not only a transparent approach to image classification but also a tool for mapping rule uncertainty, and this uncertainty assessment can serve as an estimate of classification accuracy before classification is performed. In this research, the proposed framework is evaluated on three benchmark hyperspectral image datasets. We found that the overall classification accuracy of the proposed ensemble framework was superior to that of state-of-the-art ensemble algorithms, as well as two non-ensemble algorithms, across multiple training sample sizes. We believe DoTRules can be applied more generally to the classification of discrete data such as hyperspectral satellite imagery products.
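As a minimal illustrative sketch (not the authors' code) of the entropy-based rule screening described above, the Python snippet below treats each rule as a class-frequency vector accumulated from training samples, scores it with Shannon entropy, and keeps only low-entropy rules in the dictionary of trusted rules; the rule keys, counts, and the entropy_threshold value are assumptions for illustration.

import numpy as np

def shannon_entropy(class_counts):
    # Shannon entropy (bits) of a rule's class-frequency distribution.
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]  # ignore classes the rule never predicted
    return float(-(p * np.log2(p)).sum())

def filter_trusted_rules(rules, entropy_threshold=0.6):
    # Keep only rules whose class distribution is sufficiently certain.
    # `rules` maps a rule key (e.g. a tuple of discretized band values)
    # to a per-class count vector; a trusted rule predicts its majority class.
    trusted = {}
    for key, counts in rules.items():
        if shannon_entropy(counts) <= entropy_threshold:
            trusted[key] = int(np.argmax(counts))
    return trusted

# A rule seen 9 times as class 2 and once as class 0 is trusted (low entropy),
# while a rule split evenly between two classes is filtered out.
rules = {("b1=3", "b2=7"): [1, 0, 9], ("b1=5", "b2=2"): [5, 5, 0]}
print(filter_trusted_rules(rules))  # {('b1=3', 'b2=7'): 2}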


2019 ◽  
Vol 57 (11) ◽  
pp. 8394-8416 ◽  
Author(s):  
Meiping Song ◽  
Xiaodi Shang ◽  
Yulei Wang ◽  
Chunyan Yu ◽  
Chein-I Chang

2020 ◽  
Vol 12 (20) ◽  
pp. 3342
Author(s):  
Haoyang Yu ◽  
Xiao Zhang ◽  
Meiping Song ◽  
Jiaochan Hu ◽  
Qiandong Guo ◽  
...  

Sparse representation (SR)-based models have been widely applied to hyperspectral image classification. In our previously established constraint representation (CR) model, we exploited the underlying significance of the sparse coefficients and proposed the participation degree (PD) to represent the contribution of each training sample to the representation of a testing pixel. However, spatial variants of the original residual-error-driven frameworks often face optimization obstacles owing to their strong constraints. In this paper, building on the object-based image classification (OBIC) framework, we first propose a spectral–spatial classification method called superpixel-level constraint representation (SPCR). SPCR starts from the PD associated with the sparse coefficients of the CR model and then transforms the individual PDs into a united activity degree (UAD)-driven mechanism via a spatial constraint generated by a superpixel segmentation algorithm; the final classification is determined by this UAD-driven mechanism. Because SPCR is sensitive to the segmentation scale, an improved multiscale superpixel-level constraint representation (MSPCR) is further proposed through decision fusion of the SPCR results at different scales: SPCR is first performed at each scale, and the final category of a testing pixel is the label predicted most frequently among the per-scale classification results. Experimental results on four real hyperspectral datasets, including a GF-5 satellite dataset, verify the efficiency and practicability of the two proposed methods.
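As a rough sketch of the multiscale decision-fusion step of MSPCR described above (a simplified majority vote under assumed inputs, not the authors' implementation), the Python snippet below assumes SPCR has already produced one integer label map per segmentation scale and assigns each pixel the label that occurs most often across scales.

import numpy as np

def fuse_multiscale_labels(label_maps):
    # Majority vote across per-scale SPCR label maps.
    # `label_maps` is a list of 2-D integer arrays of identical shape,
    # one per superpixel segmentation scale; ties go to the smallest label id.
    stack = np.stack(label_maps, axis=0)  # shape: (n_scales, rows, cols)
    n_classes = int(stack.max()) + 1
    # For every pixel, count how many scales voted for each class.
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)], axis=0)
    return votes.argmax(axis=0)  # fused (rows, cols) label map

# Example with three scales on a 2x2 image: pixel (0, 0) is labeled 1 at two
# of the three scales, so the fused map assigns it class 1.
scale_a = np.array([[1, 0], [2, 2]])
scale_b = np.array([[1, 1], [2, 0]])
scale_c = np.array([[0, 1], [2, 2]])
print(fuse_multiscale_labels([scale_a, scale_b, scale_c]))  # fused map: row 0 -> class 1, row 1 -> class 2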

