Multi-Feature Fusion and Adaptive Kernel Combination for SAR Image Classification

2021 · Vol 11 (4) · pp. 1603
Author(s): Xiaoying Wu, Xianbin Wen, Haixia Xu, Liming Yuan, Changlun Guo

Synthetic aperture radar (SAR) image classification is an important task in remote sensing applications. However, it is challenging because of the speckle noise inherent in SAR imaging, which significantly degrades classification performance. To address this issue, a new SAR image classification framework based on multi-feature fusion and adaptive kernel combination is proposed in this paper. Generalized neighborhoods are newly defined by expressing pixel similarity as a non-negative log-likelihood difference. An adaptive kernel combination is designed on these neighborhoods to dynamically exploit multi-feature information that is robust to speckle noise. Local consistency optimization is then applied to enhance the spatial smoothness of labels during classification. By simultaneously utilizing adaptive kernel combination and local consistency optimization for the first time, the texture feature information, context information within features, generalized spatial information between features, and complementary information among features are fully integrated to ensure accurate and smooth classification. Experiments on synthetic and real SAR images show that, compared with several state-of-the-art methods, the proposed method achieves better visual quality and classification accuracy, with image edges and details better preserved.
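
To make the kernel-combination idea concrete, the following is a minimal sketch of classifying SAR pixels with a weighted combination of per-feature kernels fed to a precomputed-kernel SVM. The feature blocks, fixed weights, and RBF bandwidths are illustrative assumptions; the paper's adaptive weighting and generalized neighborhoods are not reproduced here.

```python
# Minimal sketch: combine per-feature kernels for SAR pixel classification.
# Feature names, weights, and bandwidths are illustrative assumptions,
# not the paper's adaptive-kernel formulation.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feature_blocks, weights, gammas):
    """Weighted sum of RBF kernels, one kernel per feature block."""
    n = feature_blocks[0].shape[0]
    K = np.zeros((n, n))
    for X, w, g in zip(feature_blocks, weights, gammas):
        K += w * rbf_kernel(X, gamma=g)
    return K

# Toy data: intensity and texture features for 200 pixels, 3 classes.
rng = np.random.default_rng(0)
intensity = rng.normal(size=(200, 4))
texture = rng.normal(size=(200, 8))
labels = rng.integers(0, 3, size=200)

K = combined_kernel([intensity, texture], weights=[0.6, 0.4], gammas=[0.5, 0.1])
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.score(K, labels))
```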

2021 · Vol 13 (2) · pp. 328
Author(s): Wenkai Liang, Yan Wu, Ming Li, Yice Cao, Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, intricate spatial structural patterns and the complex statistical nature of SAR data make classification a challenging task, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers the fusion of multi-feature information of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture spatial patterns and obtain discriminative features of SAR images. To make full use of the large amount of unlabeled data, the weights of each MFFN layer are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in MFFN effectively exploits the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images from the fused feature maps. The resulting covariance descriptor is more discriminative for various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with other related algorithms.
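
As a rough illustration of the covariance pooling stage, the sketch below computes a second-order (covariance) descriptor from a stack of fused feature maps and flattens it with a matrix logarithm. The feature-map shape, regularization, and log-Euclidean mapping are assumptions for demonstration, not the paper's exact CPMN formulation.

```python
# Minimal sketch of second-order (covariance) pooling over fused feature maps.
import numpy as np
from scipy.linalg import logm

def covariance_pooling(feature_maps, eps=1e-4):
    """feature_maps: (C, H, W) fused CNN responses -> SPD covariance descriptor."""
    C, H, W = feature_maps.shape
    X = feature_maps.reshape(C, H * W)
    X = X - X.mean(axis=1, keepdims=True)
    cov = (X @ X.T) / (H * W - 1) + eps * np.eye(C)  # regularize to keep SPD
    return logm(cov)                                  # log-Euclidean flattening

fused = np.random.rand(16, 32, 32).astype(np.float64)  # assumed shape
descriptor = covariance_pooling(fused)
print(descriptor.shape)  # (16, 16); vectorized before classification
```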


2021 · Vol 13 (14) · pp. 2800
Author(s): Yuchen Xie, Wei Wu, Haiping Yang, Ning Wu, Ying Shen

Pansharpening, which fuses the panchromatic (PAN) band with multispectral (MS) bands to obtain an MS image with the spatial resolution of the PAN image, has been a popular topic in remote sensing applications in recent years. Although deep-learning-based pansharpening algorithms have achieved better performance than traditional methods, the fusion often extracts insufficient spatial information from the PAN image, producing low-quality pansharpened images. To address this problem, this paper proposes a novel progressive PAN-injected fusion method based on super-resolution (SR). The network extracts the detail features of a PAN image using a two-stream PAN input; uses a feature fusion unit (FFU) to gradually inject low-frequency PAN features, with high-frequency PAN features added after subpixel convolution; uses a plain autoencoder to inject the extracted PAN features; and applies a structural similarity index measure (SSIM) loss to focus on structural quality. Experiments on different datasets indicate that the proposed method outperforms several state-of-the-art pansharpening methods in both visual appearance and objective indexes, and that the SSIM loss helps improve pansharpening quality on the original dataset.
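
A minimal sketch of an SSIM-based loss term of the kind described above follows; the window size and stability constants use common SSIM defaults and are assumptions, not the paper's exact settings.

```python
# Minimal sketch of an SSIM loss (1 - mean SSIM) for single-band images in [0, 1].
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_loss(pred, target, win=7, c1=0.01**2, c2=0.03**2):
    """Local-window SSIM between pred and target, returned as a loss."""
    mu_p = uniform_filter(pred, win)
    mu_t = uniform_filter(target, win)
    var_p = uniform_filter(pred * pred, win) - mu_p**2
    var_t = uniform_filter(target * target, win) - mu_t**2
    cov = uniform_filter(pred * target, win) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
               ((mu_p**2 + mu_t**2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim_map.mean()

pan_sharpened = np.random.rand(64, 64)  # stand-in for the network output
reference = np.random.rand(64, 64)      # stand-in for the reference MS band
print(ssim_loss(pan_sharpened, reference))
```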


2021 · Vol 13 (2) · pp. 271
Author(s): Zhensheng Sun, Miao Liu, Peng Liu, Juan Li, Tao Yu, ...

As one of the most important active remote sensing technologies, synthetic aperture radar (SAR) offers the advantages of all-day, all-weather operation and strong penetration capability. Due to its unique electromagnetic spectrum and imaging mechanism, SAR has considerably expanded the dimensions of remote sensing data. As a topic of fundamental research in microwave remote sensing, SAR image classification has proven to be of great value in many remote sensing applications. Many widely used SAR image classification algorithms rely on the combination of hand-designed features and machine learning classifiers, which still face unresolved issues, including suboptimal feature representation, confusion caused by speckle noise, limited general applicability, and so on. To mitigate some of these issues and to improve the pattern recognition of high-resolution SAR images, a ConvCRF model combined with a superpixel boundary constraint is developed. The proposed algorithm combines the local modeling strength of deep models with the global consistency of fully connected conditional random fields. An optimization strategy that applies a superpixel boundary constraint in the inference iterations preserves structural details more efficiently. The experimental results demonstrate that the proposed method provides competitive advantages over other widely used models. In land cover classification experiments on the MSTAR, E-SAR, and GF-3 datasets, the overall accuracy of the proposed method reaches 90.18 ± 0.37%, 91.63 ± 0.27%, and 90.91 ± 0.31%, respectively. For SAR image classification, such integrated learning of local and global image features has practical implications.
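
As an illustration of how a superpixel boundary constraint can be injected into CRF-style inference, the sketch below averages class probabilities within each SLIC superpixel between iterations so that labels respect segment boundaries. The SLIC parameters and the simple averaging rule are assumptions, not the paper's exact constraint.

```python
# Minimal sketch: enforce label agreement inside superpixels between
# CRF inference iterations (requires scikit-image >= 0.19 for channel_axis).
import numpy as np
from skimage.segmentation import slic

def superpixel_constrain(prob, segments):
    """prob: (H, W, K) class probabilities; segments: (H, W) superpixel ids."""
    out = np.empty_like(prob)
    for sp in np.unique(segments):
        mask = segments == sp
        out[mask] = prob[mask].mean(axis=0)  # one shared distribution per segment
    return out

sar_intensity = np.random.rand(128, 128)                 # stand-in SAR image
segments = slic(sar_intensity, n_segments=200, channel_axis=None)
prob = np.random.dirichlet(np.ones(4), size=(128, 128))  # stand-in softmax output
constrained = superpixel_constrain(prob, segments)
print(constrained.shape)
```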


2021 · Vol 2021 · pp. 1-10
Author(s): Yanling Han, Pengxia Cui, Yun Zhang, Ruyan Zhou, Shuhu Yang, ...

Sea ice disasters are already among the most serious marine disasters in the Bohai Sea region of China, seriously affecting coastal economic development and residents’ lives. Sea ice classification is an important part of sea ice detection. Hyperspectral and multispectral imagery contain rich spectral and spatial information and provide important data support for sea ice classification. At present, most sea ice classification methods focus on shallow learning based on spectral features, while the strong performance of deep learning in remote sensing image classification offers a new direction for sea ice classification. However, network depth is limited by the small input size in sea ice image classification, so deep features in the image cannot be fully mined, which hinders further improvement of classification accuracy. Therefore, this paper proposes an image classification method based on multilevel feature fusion using a residual network. First, PCA is used to extract the first principal component of the original image, and a residual network is used to increase the network depth. FPN, PAN, and SPP modules then strengthen feature mining between layers and merge features across layers to further improve sea ice classification accuracy. To verify the effectiveness of the method, sea ice classification experiments were performed on a hyperspectral image of Bohai Bay from 2008 and a multispectral image of Bohai Bay from 2020. The experimental results show that, compared with deep learning networks with fewer layers, the proposed method deepens the network through residual connections and performs multilevel feature fusion through the FPN, PAN, and SPP modules, which effectively alleviates insufficient deep feature extraction and yields better classification performance.
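
The first preprocessing step described above, projecting the image onto its first principal component before patch extraction, can be sketched as follows; the cube dimensions are illustrative assumptions.

```python
# Minimal sketch: reduce a hyperspectral cube to its first principal component.
import numpy as np
from sklearn.decomposition import PCA

def first_principal_component(cube):
    """cube: (H, W, B) hyperspectral image -> (H, W) first-PC image."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    pc1 = PCA(n_components=1).fit_transform(flat)
    return pc1.reshape(H, W)

hyperspectral = np.random.rand(100, 100, 128)  # assumed cube size
pc_image = first_principal_component(hyperspectral)
print(pc_image.shape)  # (100, 100); patches from this image feed the residual network
```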

