Hyperspectral image recognition based on lightweight causal convolutional network

Author(s): Qiaoyu Ma, Yang Liu, Xintong Wang, Biao Yuan, Kai Zhang
2019 · Vol. 11(19) · pp. 2220
Author(s): Ximin Cui, Ke Zheng, Lianru Gao, Bing Zhang, Dong Yang, ...

Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years for their detailed feature representations. However, most CNN-based HSI classification methods use image patches as classifier input, which limits the use of spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an efficient and straightforward image-based classification framework. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates both fused multiple-receptive-field features and multiscale spatial features at different levels. The fused features are extracted using a lightweight block called the multiple receptive field feature block (MRFF), which contains several types of dilated convolution. By fusing multiple-receptive-field features and multiscale spatial features, HyMSCN obtains a comprehensive feature representation for classification. Experimental results on three real hyperspectral images confirm the efficiency of the proposed framework, and the proposed method achieves superior performance for HSI classification.
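The core idea behind the MRFF block is that the same small kernel, applied at several dilation rates, sees several receptive field sizes without adding parameters. Below is a minimal NumPy sketch of this mechanism, not the authors' implementation; the function names `dilated_conv2d` and `mrff_fuse`, the averaging fusion, and the top-left cropping are all illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Valid' 2D cross-correlation of one image band with a dilated kernel.

    A k x k kernel with dilation d has an effective receptive field of
    (k - 1) * d + 1 pixels per side, so a larger d sees a wider context
    with the same number of weights.
    """
    k = kernel.shape[0]
    eff = (k - 1) * dilation + 1          # effective receptive field
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def mrff_fuse(x, kernel, dilations=(1, 2, 3)):
    """Toy multi-receptive-field fusion: apply one kernel at several
    dilation rates, crop the maps to a common size, and average them."""
    maps = [dilated_conv2d(x, kernel, d) for d in dilations]
    size = min(m.shape[0] for m in maps)
    maps = [m[:size, :size] for m in maps]
    return np.mean(maps, axis=0)

img = np.arange(100.0).reshape(10, 10)    # stand-in for one HSI band
kern = np.ones((3, 3)) / 9.0              # simple averaging kernel
fused = mrff_fuse(img, kern)
print(fused.shape)                        # (4, 4): limited by dilation 3
```

A real MRFF block would fuse learned multi-channel feature maps (e.g. by weighted summation or concatenation); the sketch only shows why stacking dilation rates enlarges spatial context cheaply.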


2018 · Vol. 16(5)
Author(s): Feng Liang, Hanhu Liu, Xiao Wang, Yanyan Liu

2020 · Vol. 14(02) · pp. 1
Author(s): Bing Liu, Kuiliang Gao, Anzhu Yu, Wenyue Guo, Ruirui Wang, ...

2020 · Vol. 12(10) · pp. 1660
Author(s): Qiang Li, Qi Wang, Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, previous works suffer from two main problems. One is the reliance on standard three-dimensional convolution, which inflates the number of network parameters. The other is that the spatial information of the hyperspectral image is under-exploited while the spectral information is being extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts potential features with both 2D and 3D convolutions rather than a single convolution type, enabling the network to better mine the spatial features of the hyperspectral image. To exploit the effective features from the 2D units, we design a local feature fusion that adaptively aggregates all hierarchical features in the 2D units. In the 3D unit, we employ spatially and spectrally separable 3D convolutions to extract spatial and spectral information, which reduces otherwise unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance compared to existing state-of-the-art methods.
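The parameter saving from spatial-spectral separable 3D convolution comes from factoring a k x k x k kernel into a 1 x k x k spatial part and a k x 1 x 1 spectral part (k^2 + k weights instead of k^3). A minimal NumPy sketch, assuming a single-channel cube and a naive loop-based convolution (not MCNet's actual layers), shows that when the full kernel is such a rank-1 product, the two passes reproduce the full 3D pass exactly:

```python
import numpy as np

def conv3d_valid(x, kernel):
    """Naive 'valid' 3D cross-correlation, for demonstration only."""
    kb, kh, kw = kernel.shape
    b, h, w = x.shape
    out = np.zeros((b - kb + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(x[i:i+kb, j:j+kh, l:l+kw] * kernel)
    return out

rng = np.random.default_rng(0)
cube = rng.random((8, 9, 9))              # (bands, height, width) HSI cube
spectral = rng.random(3)                  # k x 1 x 1 spectral filter
spatial = rng.random((3, 3))              # 1 x k x k spatial filter

# Full 3D kernel built as the separable (rank-1) product of the two parts.
full = spectral[:, None, None] * spatial[None, :, :]

# One full 3D pass ...
y_full = conv3d_valid(cube, full)
# ... equals a spatial pass followed by a spectral pass.
y_spatial = conv3d_valid(cube, spatial[None, :, :])
y_sep = conv3d_valid(y_spatial, spectral[:, None, None])

print(np.allclose(y_full, y_sep))             # True
print(full.size, spatial.size + spectral.size)  # 27 weights vs. 12
```

In a learned network the separable pair is trained directly rather than derived from a full kernel, trading a small loss of expressiveness (only rank-1 kernels are representable per pair) for the k^3 -> k^2 + k parameter reduction.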


2021 · Vol. 13(24) · pp. 5043
Author(s): Qian Liu, Zebin Wu, Xiuping Jia, Yang Xu, Zhihui Wei

Current mainstream networks for hyperspectral image (HSI) classification employ image patches as inputs for feature extraction. Spatial information extraction is limited by the input size, preventing networks from learning and reasoning effectively from a global perspective. Non-local networks, a common component for capturing long-range dependencies, rely on pixel-by-pixel information interaction and thus incur unaffordable computational costs and information redundancy. To address these issues, we propose a class feature fused fully convolutional network (CFF-FCN) with a local feature extraction block (LFEB) and a class feature fusion block (CFFB) that jointly exploits local and global information. The LFEB, based on dilated convolutions and a reverse loop mechanism, acquires local spectral-spatial features at multiple levels and delivers shallower-layer features for coarse classification. The CFFB calculates a global class representation to enhance pixel features, propagating robust global information to every pixel at low computational cost. CFF-FCN considers a fully global class context and obtains a more discriminative representation by concatenating high-level local features with the re-integrated global features. Experimental results on three real HSI data sets demonstrate that the proposed fully convolutional network is superior to multiple state-of-the-art deep learning-based approaches, especially when the number of training samples is small.
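The cost argument in the abstract is that aggregating pixel features into K class representations and redistributing them is O(N*K), versus O(N^2) for pixel-by-pixel non-local interaction. A minimal NumPy sketch of that idea follows; it is an illustrative reading of the CFFB, not the paper's implementation, and the function name, the probability-weighted averaging, and the argmax-based redistribution are all assumptions.

```python
import numpy as np

def class_feature_fusion(features, coarse_probs):
    """Toy class-level feature fusion.

    features:     (N, C) pixel features from a local branch.
    coarse_probs: (N, K) soft scores from a coarse classifier.

    Global per-class representations are probability-weighted means of
    the pixel features; each pixel is then enhanced by concatenating the
    representation of its most likely class. Cost is O(N*K), far below
    the O(N^2) pixel-by-pixel interaction of a non-local block.
    """
    # (K, C) class representations: weighted average of pixel features.
    weights = coarse_probs / coarse_probs.sum(axis=0, keepdims=True)
    class_repr = weights.T @ features
    # Re-integrate: each pixel receives its argmax class's global feature.
    assign = coarse_probs.argmax(axis=1)
    return np.concatenate([features, class_repr[assign]], axis=1)

rng = np.random.default_rng(1)
feats = rng.random((100, 16))             # 100 pixels, 16-dim features
probs = rng.random((100, 4))              # coarse scores over 4 classes
fused = class_feature_fusion(feats, probs)
print(fused.shape)                        # (100, 32)
```

Concatenating rather than adding the global feature keeps the high-level local features intact, matching the abstract's description of concatenating local and re-integrated global features.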

