Hyperspectral Image Data Classification with Refined Spectral-Spatial Features Based on a Stacked Autoencoder Approach

2019 ◽  
Vol 13 ◽  
Author(s):  
Jacintha Menezes ◽  
Nagesh Poojary

Background: Hyperspectral (HS) image data comprises a tremendous amount of spatial and spectral information, which enables feature identification and classification with high accuracy. Within the deep learning (DL) framework, stacked autoencoders (SAEs) have been successfully applied to deep spectral feature extraction in high-dimensional data. However, deep feature extraction from HS images becomes complex and time consuming because of the hundreds of spectral bands in the hypercubes. Methods: The proposed method aims to condense the spectral-spatial information through suitable feature extraction and feature selection methods, reducing the data dimension to an appropriate scale. The reduced feature set is then processed by an SAE for final feature representation and classification. Results: Compared with uncondensed spectral-spatial features fed directly to the SAE network, the proposed method reduced computation time by ~300 s and improved classification accuracy by ~15%. Conclusion: Future research could explore combinations of current state-of-the-art techniques.
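A minimal sketch of the general idea, not the authors' implementation: the spectral dimension of a hypercube is first condensed (here with plain PCA, one of many possible feature extraction choices) and the reduced features are then passed to a small stacked autoencoder. All dimensions, layer sizes, and the random data are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Hypothetical hypercube: 145 x 145 pixels, 200 spectral bands.
cube = np.random.rand(145, 145, 200).astype(np.float32)
pixels = cube.reshape(-1, 200)

# Step 1: condense the spectral information (here, PCA to 30 components).
reduced = PCA(n_components=30).fit_transform(pixels).astype(np.float32)

# Step 2: a two-layer stacked autoencoder on the condensed features.
class StackedAE(nn.Module):
    def __init__(self, dim_in=30, dim_hidden=(20, 10)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, dim_hidden[0]), nn.ReLU(),
            nn.Linear(dim_hidden[0], dim_hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(dim_hidden[1], dim_hidden[0]), nn.ReLU(),
            nn.Linear(dim_hidden[0], dim_in),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = StackedAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.from_numpy(reduced)
for _ in range(5):  # a few reconstruction epochs, for illustration only
    recon, _ = model(x)
    loss = loss_fn(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The encoder output would then feed a classifier (e.g., a softmax layer).
```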

2019 ◽  
Vol 11 (20) ◽  
pp. 2454 ◽  
Author(s):  
Miaomiao Liang ◽  
Licheng Jiao ◽  
Zhe Meng

Filter banks transferred from a pre-trained deep convolutional network significantly heighten inter-class separability in hyperspectral image feature extraction, but at the same time weaken intra-class consistency. In this paper, we propose a new superpixel-based relational auto-encoder for cohesive spectral–spatial feature learning. First, multiscale local spatial information and global semantic features of hyperspectral images are extracted by filter banks transferred from the pre-trained VGG-16. Meanwhile, superpixel segmentation is used to construct the low-dimensional manifold embedded in the spectral domain. A representational consistency constraint within each superpixel is then added to the objective function of a sparse auto-encoder, which iteratively guides the network to learn a more cohesive hidden representation of the deep spatial features. The superpixel-based local consistency constraint not only reduces the computational complexity but also builds neighborhood relationships adaptively. The final feature extraction is accomplished by collaborative encoding of spectral–spatial features and weighted fusion of multiscale features. Extensive experimental results demonstrate that the proposed method extracts discriminative features as intended and offers advantages over several existing methods, especially under extremely limited training-sample conditions.
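A rough sketch of one way such a superpixel consistency term could look; it is an assumption for illustration, not the paper's objective function. SLIC superpixels define the neighborhoods, and a penalty pulls the encodings of pixels within the same superpixel toward their centroid; this term would be added to the sparse auto-encoder's reconstruction loss.

```python
import numpy as np
import torch
from skimage.segmentation import slic

# Hypothetical data: an H x W grid of D-dimensional deep spatial features.
H, W, D = 64, 64, 128
features = torch.rand(H * W, D)
rgb_like = np.random.rand(H, W, 3)                        # stand-in false-color view
labels = slic(rgb_like, n_segments=100, compactness=10)   # superpixel map (H x W)
labels = torch.from_numpy(labels.reshape(-1))

def superpixel_consistency(z, seg):
    """Mean squared distance of each encoding to its superpixel centroid."""
    loss = 0.0
    for s in seg.unique():
        members = z[seg == s]
        loss = loss + ((members - members.mean(0)) ** 2).sum()
    return loss / z.shape[0]

penalty = superpixel_consistency(features, labels)
```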


2019 ◽  
Vol 16 (5) ◽  
pp. 781-785 ◽  
Author(s):  
Zhikun Chen ◽  
Junjun Jiang ◽  
Chong Zhou ◽  
Xinwei Jiang ◽  
Shaoyuan Fu ◽  
...  

2019 ◽  
Vol 11 (19) ◽  
pp. 2220 ◽  
Author(s):  
Ximin Cui ◽  
Ke Zheng ◽  
Lianru Gao ◽  
Bing Zhang ◽  
Dong Yang ◽  
...  

Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years for their detailed feature representations. However, most CNN-based HSI classification methods use image patches as the classifier input, which limits the use of spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an efficient and straightforward image-based classification framework. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates fused multiple-receptive-field features with multiscale spatial features at different levels. The fused features are exploited using a lightweight block called the multiple receptive field feature block (MRFF), which contains several types of dilated convolution. By fusing multiple-receptive-field features and multiscale spatial features, HyMSCN attains a comprehensive feature representation for classification. Experimental results on three real hyperspectral images confirm the efficiency of the proposed framework, and the proposed method achieves superior performance for HSI classification.
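A short sketch of the underlying idea, with channel sizes and dilation rates as assumptions rather than the paper's exact MRFF configuration: parallel dilated convolutions with different rates are fused so that each pixel sees several neighborhood scales in a single full-image forward pass, instead of per-pixel patch processing.

```python
import torch
import torch.nn as nn

class MultiReceptiveFieldBlock(nn.Module):
    def __init__(self, channels=64, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=rate keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        out = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(out)

# Full-image input (1 sample, 64 feature maps, 145 x 145 pixels).
x = torch.rand(1, 64, 145, 145)
y = MultiReceptiveFieldBlock()(x)   # same spatial size, fused receptive fields
```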


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Tsun-Kuo Lin

This paper develops a principal component analysis (PCA)-integrated algorithm for feature identification in manufacturing, based on an adaptive PCA scheme for identifying image features in vision-based inspection. PCA is a widely used statistical method for pattern recognition, but an effective PCA-based approach for identifying suitable image features in manufacturing has yet to be developed. Unsuitable image features tend to yield poor results in conventional visual inspection, and research has shown that unsuitable or redundant features can degrade object-detection performance. To address these problems, the adaptive PCA-based algorithm developed in this study identifies suitable image features with a support vector machine (SVM) model for inspecting various object images; this approach addresses the detection problems that arise when the extracted image features are challenging in manufacturing processes. Experimental results indicate that the proposed algorithm adaptively selects appropriate image features. The algorithm combines image feature extraction with PCA/SVM classification to detect patterns in manufacturing, achieving high-performance detection and outperforming existing methods.
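A minimal sketch, assuming flattened grayscale inspection images and illustrative dimensions; it shows a plain PCA-then-SVM pipeline, not the paper's adaptive feature-selection rule.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical data: 500 inspection images of 32x32 pixels, two defect classes.
X = np.random.rand(500, 32 * 32)
y = np.random.randint(0, 2, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA condenses the raw pixels into a handful of principal components,
# and the SVM classifies in that reduced feature space.
model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```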


2012 ◽  
Author(s):  
HaiCheng Qu ◽  
Ye Zhang ◽  
Zhouhan Lin ◽  
Hao Chen

2017 ◽  
Vol 20 (4) ◽  
pp. 309-318 ◽  
Author(s):  
Bin Zhao ◽  
Lianru Gao ◽  
Wenzhi Liao ◽  
Bing Zhang
