From Local to Global: Class Feature Fused Fully Convolutional Network for Hyperspectral Image Classification

2021 ◽  
Vol 13 (24) ◽  
pp. 5043
Author(s):  
Qian Liu ◽  
Zebin Wu ◽  
Xiuping Jia ◽  
Yang Xu ◽  
Zhihui Wei

Current mainstream networks for hyperspectral image (HSI) classification employ image patches as inputs for feature extraction. Spatial information extraction is limited by the input size, which prevents networks from learning and reasoning effectively from a global perspective. As a common component for capturing long-range dependencies, non-local networks with pixel-by-pixel information interaction bring unaffordable computational costs and information redundancy. To address these issues, we propose a class feature fused fully convolutional network (CFF-FCN) with a local feature extraction block (LFEB) and a class feature fusion block (CFFB) to jointly utilize local and global information. The LFEB, based on dilated convolutions and a reverse loop mechanism, acquires local spectral–spatial features at multiple levels and delivers shallower-layer features for coarse classification. The CFFB calculates a global class representation to enhance pixel features; robust global information is propagated to every pixel at low computational cost. CFF-FCN considers a fully global class context and obtains a more discriminative representation by concatenating high-level local features with re-integrated global features. Experimental results on three real HSI data sets demonstrate that the proposed fully convolutional network is superior to multiple state-of-the-art deep learning-based approaches, especially when the number of training samples is small.
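The class feature fusion step can be pictured as pooling pixel features into per-class representations using the coarse classification scores, then redistributing those representations to every pixel. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation; the function name and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def class_feature_fusion(feats, coarse_logits):
    """Pool pixel features into per-class representations using coarse scores,
    then redistribute them to every pixel. feats: (B, C, H, W); coarse_logits:
    (B, K, H, W) for K classes. Names and shapes are assumptions."""
    B, C, H, W = feats.shape
    attn = F.softmax(coarse_logits.flatten(2), dim=-1)        # (B, K, HW): where each class lives
    flat = feats.flatten(2)                                   # (B, C, HW)
    centers = torch.bmm(attn, flat.transpose(1, 2))           # (B, K, C): global class representations
    pix = F.softmax(coarse_logits, dim=1).flatten(2)          # (B, K, HW): class mix per pixel
    fused = torch.bmm(centers.transpose(1, 2), pix)           # (B, C, HW): re-integrated global features
    return torch.cat([feats, fused.view(B, C, H, W)], dim=1)  # concatenate local and global
```

Because the interaction runs through K class centers rather than all HW pixels, the cost is O(K·HW) instead of the O(HW²) of a pixel-by-pixel non-local block.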

2019 ◽  
Vol 11 (20) ◽  
pp. 2454 ◽  
Author(s):  
Miaomiao Liang ◽  
Licheng Jiao ◽  
Zhe Meng

Filter banks transferred from a pre-trained deep convolutional network perform well at heightening inter-class separability for hyperspectral image feature extraction, but they simultaneously weaken intra-class consistency. In this paper, we propose a new superpixel-based relational auto-encoder for cohesive spectral–spatial feature learning. Firstly, multiscale local spatial information and global semantic features of hyperspectral images are extracted by filter banks transferred from the pre-trained VGG-16. Meanwhile, we utilize superpixel segmentation to construct the low-dimensional manifold embedded in the spectral domain. Then, a representational consistency constraint within each superpixel is added to the objective function of the sparse auto-encoder, which iteratively guides the network to learn hidden representations of the deep spatial features with greater cohesiveness. The superpixel-based local consistency constraint in this work not only reduces the computational complexity but also builds the neighborhood relationships adaptively. The final feature extraction is accomplished by collaborative encoding of spectral–spatial features and weighted fusion of multiscale features. Extensive experimental results demonstrate that our proposed method achieves the expected results in discriminant feature extraction and has clear advantages over several existing methods, especially under extremely limited training-sample conditions.
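One way to read the superpixel consistency constraint is as a regularizer that pulls the hidden codes of pixels in the same superpixel toward their mean. A minimal sketch under that assumption follows; the exact form and weighting of the term in the paper may differ.

```python
import torch

def superpixel_consistency_loss(hidden, sp_labels):
    """Pull hidden codes of pixels in the same superpixel toward their mean.
    hidden: (N, D) codes; sp_labels: (N,) superpixel index per pixel."""
    loss = hidden.new_zeros(())
    for s in sp_labels.unique():
        h = hidden[sp_labels == s]                  # codes of one superpixel
        loss = loss + ((h - h.mean(0)) ** 2).sum()  # within-superpixel scatter
    return loss / hidden.shape[0]
```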


2018 ◽  
Vol 10 (11) ◽  
pp. 1827 ◽  
Author(s):  
Ahram Song ◽  
Jaewan Choi ◽  
Youkyung Han ◽  
Yongil Kim

Hyperspectral change detection (CD) can be performed effectively using deep-learning networks. Although these approaches require qualified training samples, it is difficult to obtain ground-truth data in the real world, and preserving spatial information during training is difficult due to structural limitations. To solve these problems, our study proposes a novel CD method for hyperspectral images (HSIs), including sample generation and a deep-learning network called the recurrent three-dimensional (3D) fully convolutional network (Re3FCN), which merges the advantages of a 3D fully convolutional network (FCN) and a convolutional long short-term memory (ConvLSTM). Principal component analysis (PCA) and the spectral correlation angle (SCA) were used to generate training samples with high probabilities of being changed or unchanged. This strategy assisted in training with fewer samples of representative feature expression. The Re3FCN mainly comprises spectral–spatial and temporal modules. In particular, a spectral–spatial module with a 3D convolutional layer extracts the spectral–spatial features from the HSIs simultaneously, whilst a temporal module with ConvLSTM records and analyzes the multi-temporal HSI change information. The study first proposed a simple and effective method to generate samples for network training; this method can be applied effectively to cases with no training samples. Re3FCN can perform end-to-end detection for binary and multiple changes. Moreover, Re3FCN can receive multi-temporal HSIs directly as input without learning the characteristics of multiple changes. Finally, the network extracts joint spectral–spatial–temporal features and preserves the spatial structure during learning through its fully convolutional structure. This study is the first to use a 3D FCN and a ConvLSTM for remote-sensing CD. To demonstrate the effectiveness of the proposed CD method, we performed binary and multi-class CD experiments. The results revealed that the Re3FCN outperformed conventional methods such as change vector analysis, iteratively reweighted multivariate alteration detection, PCA-SCA, FCN, and the combination of 2D convolutional layers and fully connected LSTM.
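The temporal module builds on the ConvLSTM of Shi et al. (2015), whose cell replaces the matrix multiplications of a standard LSTM with convolutions so that the recurrent state keeps its spatial layout. A minimal cell, as a sketch (channel counts and kernel size are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gates computed by convolution so the
    recurrent state keeps its (H, W) spatial structure."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state                                   # hidden/cell states: (B, hid_ch, H, W)
        i, f, o, g = self.conv(torch.cat([x, h], 1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()   # gated update of the cell state
        h = o.sigmoid() * c.tanh()                     # new hidden state
        return h, (h, c)
```

Feeding the two (or more) temporal HSI feature maps through such a cell in sequence lets the final hidden state summarize the change information.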


2020 ◽  
Vol 12 (10) ◽  
pp. 1660 ◽  
Author(s):  
Qiang Li ◽  
Qi Wang ◽  
Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, there are two main problems in previous works. One is the reliance on typical three-dimensional convolution, which inflates the number of network parameters. The other is that the spatial information of the hyperspectral image receives too little attention while the spectral information is being extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts the potential features by 2D/3D convolution instead of a single type of convolution, which enables the network to better mine the spatial features of the hyperspectral image. To exploit the effective features from the 2D units, we design a local feature fusion that adaptively analyzes all hierarchical features in the 2D units. In the 3D units, we employ spatially and spectrally separable 3D convolution to extract spatial and spectral information, which reduces otherwise unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance in comparison to existing state-of-the-art methods.
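Separable 3D convolution here can be understood as factorizing a k×k×k kernel into a 1×k×k spatial part and a k×1×1 spectral part, cutting per-channel-pair parameters from k³ to k² + k. A sketch of that factorization (module name and defaults are assumptions, not the paper's code):

```python
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """A k*k*k 3D convolution factorized into a spatial (1, k, k) part and a
    spectral (k, 1, 1) part; an illustrative sketch, not the paper's code."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k), padding=(0, k // 2, k // 2))
        self.spectral = nn.Conv3d(out_ch, out_ch, (k, 1, 1), padding=(k // 2, 0, 0))

    def forward(self, x):              # x: (B, C, bands, H, W)
        return self.spectral(self.spatial(x))
```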


Author(s):  
A. Kianisarkaleh ◽  
H. Ghassemian ◽  
F. Razzazi

Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in almost unlimited quantity, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and it proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed selection of unlabeled samples for unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.
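The abstract does not spell out how the prior and posterior classifications interact, but one plausible reading is: classify pixels spectrally after PCA (prior), re-classify with spatial smoothing (posterior), and keep unlabeled pixels on which the two agree. The following is a hypothetical sketch of that reading; select_unlabeled, the k-NN classifier, and the 5×5 window are all assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def select_unlabeled(cube, X_train, y_train, n_components=10, win=5):
    """Hypothetical sketch: keep pixels whose spectral (prior) prediction
    survives spatial smoothing (posterior). cube: (H, W, B) hyperspectral
    image; X_train: (n, B) labeled spectra; y_train: integer labels 0..K-1."""
    H, W, B = cube.shape
    pca = PCA(n_components).fit(cube.reshape(-1, B))
    X = pca.transform(cube.reshape(-1, B))
    clf = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_train), y_train)
    prior = clf.predict(X).reshape(H, W)              # spectral-only (prior) labels
    K = int(prior.max()) + 1
    onehot = np.eye(K)[prior]                         # (H, W, K) one-hot label maps
    smooth = np.stack([uniform_filter(onehot[..., k], size=win)
                       for k in range(K)], axis=-1)   # windowed class frequencies
    posterior = smooth.argmax(-1)                     # spatially smoothed (posterior) labels
    return np.argwhere(prior == posterior)            # coordinates of reliable unlabeled pixels
```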


2019 ◽  
Vol 13 ◽  
Author(s):  
Jacintha Menezes ◽  
Nagesh Poojary

Background: Hyperspectral (HS) image data comprise a tremendous amount of spatial and spectral information, which enables feature identification and classification with high accuracy. Within the deep learning (DL) framework, stacked autoencoders (SAEs) have been successfully applied for deep spectral feature extraction in high-dimensional data. HS deep image feature extraction becomes complex and time-consuming due to the hundreds of spectral bands available in the hypercubes. Methods: The proposed method aims to condense the spectral-spatial information through suitable feature extraction and feature selection methods, reducing the data dimension to an appropriate scale. The reduced feature set is then processed by an SAE for the final feature representation and classification. Results: The proposed method reduced computation time by ~300 s and improved classification accuracy by ~15% compared to uncondensed spectral-spatial features fed directly to the SAE network. Conclusion: Future research could explore combinations of the most state-of-the-art techniques.
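For reference, a stacked autoencoder of the kind described is a symmetric stack of encoding and decoding layers trained to reconstruct its input, with the bottleneck code serving as the learned feature. A minimal sketch (the layer sizes are illustrative, not the paper's configuration):

```python
import torch.nn as nn

class SAE(nn.Module):
    """Symmetric stacked autoencoder; the bottleneck code is the learned
    feature. Layer sizes are illustrative."""
    def __init__(self, dims=(60, 40, 20)):
        super().__init__()
        enc, dec = [], []
        for d_in, d_out in zip(dims[:-1], dims[1:]):   # e.g. 60 -> 40 -> 20
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        rev = dims[::-1]
        for d_in, d_out in zip(rev[:-1], rev[1:]):     # e.g. 20 -> 40 -> 60
            dec += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)            # condensed feature representation
        return self.decoder(z), z      # reconstruction and bottleneck code
```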


2021 ◽  
Vol 13 (17) ◽  
pp. 3396
Author(s):  
Feng Zhao ◽  
Junjie Zhang ◽  
Zhe Meng ◽  
Hanqiang Liu

Recently, with the extensive application of deep learning techniques, particularly the convolutional neural network (CNN), in the hyperspectral image (HSI) field, research on HSI classification has stepped into a new stage. To counter the small receptive field of naive convolution, dilated convolution has been introduced into HSI classification. However, dilated convolution usually generates blind spots in the receptive field, yielding discontinuous spatial information. To solve this problem, a densely connected pyramidal dilated convolutional network (PDCNet) is proposed in this paper. Firstly, a pyramidal dilated convolutional (PDC) layer that integrates several sub-dilated convolutional layers is proposed, where the dilation factor of the sub-dilated convolutions increases exponentially, achieving multi-scale receptive fields. Secondly, the number of sub-dilated convolutional layers increases in a pyramidal pattern with the depth of the network, thereby capturing more comprehensive hyperspectral information in the receptive field. Furthermore, a feature fusion mechanism combining pixel-by-pixel addition and channel stacking is adopted to extract more abstract spectral–spatial features. Finally, in order to reuse the features of the previous layers more effectively, dense connections are applied in the densely pyramidal dilated convolutional (DPDC) blocks. Experiments on three well-known HSI datasets indicate that the proposed PDCNet has good classification performance compared with other popular models.
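A PDC layer of the kind described can be sketched as parallel 3×3 convolutions with dilation rates 1, 2, 4, ..., fused by pixel-wise addition and channel stacking; setting the padding equal to the dilation rate keeps the spatial size fixed, and the mixed rates cover the blind spots a single large dilation would leave. An illustrative sketch (class name and branch count are assumptions):

```python
import torch
import torch.nn as nn

class PDCLayer(nn.Module):
    """Parallel 3x3 convolutions with exponentially growing dilation (1, 2, 4, ...),
    fused by pixel-wise addition and channel stacking. Names and branch count
    are assumptions, not the paper's code."""
    def __init__(self, channels, n_branches=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=2 ** i, dilation=2 ** i)
            for i in range(n_branches)             # padding == dilation keeps (H, W) fixed
        )

    def forward(self, x):
        added = torch.stack([b(x) for b in self.branches]).sum(0)  # pixel-by-pixel addition
        return torch.cat([x, added], dim=1)                        # channel stacking for dense reuse
```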


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 204 ◽  
Author(s):  
Chenming Li ◽  
Yongchang Wang ◽  
Xiaoke Zhang ◽  
Hongmin Gao ◽  
Yao Yang ◽  
...  

With the development of high-resolution optical sensors, the classification of ground objects combined with multivariate optical sensors is currently a hot topic. Deep learning methods, such as convolutional neural networks, are applied to feature extraction and classification. In this work, a novel deep belief network (DBN) method for hyperspectral image classification, based on multivariate optical sensors and stacked from restricted Boltzmann machines (RBMs), is proposed. We introduce the DBN framework to classify spatial hyperspectral sensor data, and then verify the improved method, which combines spectral and spatial information. After unsupervised pretraining and supervised fine-tuning, the DBN model successfully learns features. Additionally, we add a logistic regression layer to classify the hyperspectral images. The proposed training method, which fuses spectral and spatial information, was tested on the Indian Pines and Pavia University datasets. The advantages of this method over traditional methods are as follows: (1) the network has a deep structure, and its feature extraction ability is stronger than that of traditional classifiers; (2) experimental results indicate that our method outperforms traditional classification methods and other deep learning approaches.
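The building block of such a DBN is the restricted Boltzmann machine, pretrained greedily with contrastive divergence: sample the hidden units from the visible layer, reconstruct the visible layer, and nudge the weights toward the data statistics and away from the reconstruction statistics. A minimal binary RBM with a single CD-1 update, as a sketch (learning rate and initialization are illustrative):

```python
import torch

class RBM:
    """Minimal binary restricted Boltzmann machine with one CD-1 update.
    Learning rate and initialization are illustrative."""
    def __init__(self, n_vis, n_hid, lr=0.01):
        self.W = torch.randn(n_vis, n_hid) * 0.01
        self.bv, self.bh, self.lr = torch.zeros(n_vis), torch.zeros(n_hid), lr

    def p_h(self, v):
        return torch.sigmoid(v @ self.W + self.bh)      # hidden activation probabilities

    def p_v(self, h):
        return torch.sigmoid(h @ self.W.t() + self.bv)  # visible reconstruction probabilities

    def cd1(self, v0):
        ph0 = self.p_h(v0)
        v1 = self.p_v(torch.bernoulli(ph0))             # one Gibbs step: sample h, reconstruct v
        ph1 = self.p_h(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.t() @ ph0 - v1.t() @ ph1) / n  # data minus model statistics
        self.bv += self.lr * (v0 - v1).mean(0)
        self.bh += self.lr * (ph0 - ph1).mean(0)
```

Stacking several such RBMs, each trained on the hidden activations of the one below, yields the pretrained DBN that is then fine-tuned with the logistic regression layer on top.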


2021 ◽  
Vol 9 ◽  
Author(s):  
Siyu Xia ◽  
Fan Wang ◽  
Fei Xie ◽  
Lei Huang ◽  
Qi Wang ◽  
...  

To ensure the safety and reliability of electronic equipment, detecting surface defects on the printed circuit board (PCB) is a necessary task. Due to the small size, complexity, and diversity of minor PCB defects, it is difficult to identify them with traditional methods, and deep learning-based object detection methods face an imbalance between foreground and background when detecting minor defects. Therefore, this paper proposes a minor defect detection method for PCBs based on FL-RFCN (focal loss and Region-based Fully Convolutional Network) and PHFE (parallel high-definition feature extraction). Firstly, this paper uses the Region-based Fully Convolutional Network (R-FCN) to identify minor defects on the PCB. Secondly, the focal loss is used to solve the problem of data imbalance in neural networks. Thirdly, the parallel high-definition feature extraction algorithm is used to improve the recognition rate of minor defects. In the detection of minor defects on PCBs, an ablation experiment shows that the mean average precision (mAP) of the proposed method is increased by 7.4. In comparative experiments, the mAP of the proposed method is 12.3 higher than that of YOLOv3 and 6.7 higher than that of Faster R-CNN.
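Focal loss (Lin et al., 2017) addresses the foreground-background imbalance by scaling the cross-entropy of each example by (1 - p_t)^gamma, so well-classified background contributes little gradient. A minimal binary form, as a sketch (gamma = 2 and alpha = 0.25 are the commonly used defaults, not necessarily this paper's settings):

```python
import torch

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)**gamma so easy
    (mostly background) examples contribute little gradient.
    logits, targets: (N,) tensors with targets in {0, 1}."""
    p = torch.sigmoid(logits)
    pt = torch.where(targets == 1, p, 1 - p)             # probability of the true class
    alpha_t = torch.where(targets == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha)) # class-balancing weight
    return (-alpha_t * (1 - pt) ** gamma * torch.log(pt)).mean()
```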

