CNN-LRP: Understanding Convolutional Neural Networks Performance for Target Recognition in SAR Images

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4536
Author(s):  
Bo Zang ◽  
Linlin Ding ◽  
Zhenpeng Feng ◽  
Mingzhe Zhu ◽  
Tao Lei ◽  
...  

Target recognition is one of the most challenging tasks in synthetic aperture radar (SAR) image processing, since it depends heavily on a series of pre-processing techniques that usually require sophisticated manipulation for different data and consume huge computational resources. To alleviate this limitation, numerous deep-learning-based target recognition methods have been proposed, particularly methods built on convolutional neural networks (CNNs) owing to their strong capability for data abstraction and their end-to-end structure. Although complex pre-processing can then be avoided, the inner mechanism of a CNN remains unclear: such a “black box” only reports a result, not what the CNN learned from the input data, which makes it difficult for researchers to analyze the causes of errors. Layer-wise relevance propagation (LRP) is a prevalent pixel-level relevance redistribution algorithm for visualizing a neural network’s inner mechanism. However, LRP is usually applied to sparse auto-encoders with only fully-connected layers rather than to CNNs, and such network structures usually achieve much lower recognition accuracy than CNNs. In this paper, we propose a novel LRP algorithm designed specifically for understanding CNN performance on SAR image target recognition. We provide a concise form of the correlation between the output of a layer and the weights of the next layer in a CNN. The proposed method reveals the positive and negative contributions of the input SAR image pixels to the CNN’s classification, which can be viewed as a clear visual explanation of the CNN’s recognition mechanism. Numerous experimental results demonstrate that the proposed method outperforms the common LRP.
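To make the propagation idea concrete, the following is a minimal NumPy sketch of the standard LRP epsilon-rule for a single fully-connected layer. The shapes, variable names, and the epsilon stabiliser are illustrative assumptions; it does not reproduce the paper's CNN-specific formulation.

```python
# Minimal sketch of the LRP epsilon-rule for one dense layer (assumed, not the
# paper's exact CNN formulation): R_i = a_i * sum_j w_ij * R_j / z_j.
import numpy as np

def lrp_dense(a, w, b, relevance_out, eps=1e-6):
    """Redistribute relevance from a layer's output back to its input.

    a             : (n_in,)        activations entering the layer
    w             : (n_in, n_out)  weight matrix
    b             : (n_out,)       bias
    relevance_out : (n_out,)       relevance assigned to the layer's output
    """
    z = a @ w + b                                  # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # epsilon stabiliser avoids division by zero
    s = relevance_out / z                          # relevance per unit of pre-activation
    c = w @ s                                      # redistribute along each weighted connection
    return a * c                                   # relevance assigned to the inputs

# Toy usage: propagate relevance through a 4 -> 3 dense layer.
rng = np.random.default_rng(0)
R_in = lrp_dense(rng.random(4), rng.standard_normal((4, 3)), np.zeros(3), rng.random(3))
print(R_in.shape)  # (4,)
```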

2021 ◽  
Vol 13 (9) ◽  
pp. 1772
Author(s):  
Zhenpeng Feng ◽  
Mingzhe Zhu ◽  
Ljubiša Stanković ◽  
Hongbing Ji

Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR image processing. Generally, SAR image interpretation comprises complex procedures including filtering, feature extraction, image segmentation, and target recognition, which greatly reduce the efficiency of data processing. In the era of deep learning, numerous automatic target recognition methods have been proposed based on convolutional neural networks (CNNs) owing to their strong capabilities for data abstraction and mining. In contrast to general methods, CNNs have an end-to-end structure in which complex data preprocessing is not needed, so the efficiency can be improved dramatically once a CNN is well trained. However, the recognition mechanism of a CNN is unclear, which hinders its application in many scenarios. In this paper, Self-Matching class activation mapping (CAM) is proposed to visualize what a CNN learns from SAR images when making a decision. Self-Matching CAM assigns a pixel-wise weight matrix to the feature maps of different channels by matching them with the input SAR image. In this way, the detailed information of the target is well preserved in an accurate visual explanation heatmap of the CNN for SAR image interpretation. Numerous experiments on a benchmark dataset (MSTAR) verify the validity of Self-Matching CAM.
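The abstract only states that feature maps are weighted pixel-wise by matching them against the input SAR image; the sketch below assumes an element-wise product with the normalised input as that matching step, which is a simplification rather than the paper's exact definition.

```python
# Hedged sketch of a Self-Matching-CAM-style heatmap (the pixel-wise "matching"
# via element-wise product with the normalised input is an assumed form).
import numpy as np
from scipy.ndimage import zoom

def self_matching_cam(feature_maps, sar_image):
    """feature_maps: (C, h, w) conv-layer activations; sar_image: (H, W) input chip."""
    H, W = sar_image.shape
    img = (sar_image - sar_image.min()) / (np.ptp(sar_image) + 1e-8)   # normalise input
    heatmap = np.zeros((H, W))
    for fmap in feature_maps:
        up = zoom(fmap, (H / fmap.shape[0], W / fmap.shape[1]), order=1)  # upsample to input size
        up = (up - up.min()) / (np.ptp(up) + 1e-8)
        heatmap += up * img            # pixel-wise weighting by the input image
    return heatmap / feature_maps.shape[0]
```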


2016 ◽  
Vol 2016 ◽  
pp. 1-11 ◽  
Author(s):  
Hongqiao Wang ◽  
Yanning Cai ◽  
Guangyuan Fu ◽  
Shicheng Wang

Aiming at multiple-target recognition in large-scene SAR images with strong speckle, a robust full-process method covering target detection, feature extraction, and target recognition is studied in this paper. By introducing a simple 8-neighborhood orthogonal basis, a local multiscale decomposition method anchored at the target's center of gravity is presented. Using this method, an image can be processed with a multilevel sampling filter, and the target's multiscale features in eight directions, together with one low-frequency filtering feature, can be derived directly by sampling key pixels. At the same time, a recognition algorithm that integrates the local multiscale features with a multiscale wavelet kernel classifier is studied, which realizes quick, robust, and highly accurate classification of multiclass image targets. The classification results and the analysis of adaptability to speckle show that the algorithm is effective not only for the MSTAR (Moving and Stationary Target Automatic Recognition) target chips but also for automatic recognition of multiple classes of targets in large-scene SAR images with strong speckle; meanwhile, the method is robust to target rotation and scale transformation.
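The sketch below illustrates only the general idea of sampling directional features at several scales around the target's center of gravity; the direction set, the dyadic step sizes, and the local-mean low-frequency feature are illustrative assumptions and do not reproduce the paper's orthogonal basis.

```python
# Rough sketch (assumed form) of multiscale directional sampling around the
# target's center of gravity; assumes the chip contains a target with non-zero intensity.
import numpy as np

DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def local_multiscale_features(image, n_scales=3):
    """Return eight directional feature vectors plus one low-frequency feature."""
    ys, xs = np.nonzero(image)
    weights = image[ys, xs]
    cy = int(round(np.average(ys, weights=weights)))    # center of gravity (row)
    cx = int(round(np.average(xs, weights=weights)))    # center of gravity (col)
    feats = []
    for dy, dx in DIRECTIONS:
        samples = []
        for s in range(1, n_scales + 1):
            y, x = cy + dy * 2 ** s, cx + dx * 2 ** s   # dyadic sampling steps
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                samples.append(image[y, x])
        feats.append(np.array(samples))
    low_freq = image[max(cy - 2, 0):cy + 3, max(cx - 2, 0):cx + 3].mean()
    return feats, low_freq
```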


2020 ◽  
Vol 12 (16) ◽  
pp. 2636
Author(s):  
Emanuele Dalsasso ◽  
Xiangli Yang ◽  
Loïc Denis ◽  
Florence Tupin ◽  
Wen Yang

Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Many different schemes have been proposed for the restoration of intensity SAR images. Among the possible approaches, methods based on convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration. CNN training requires good training data: many pairs of speckle-free/speckle-corrupted images. This is an issue in SAR applications, given the inherent scarcity of speckle-free images. To handle this problem, this paper analyzes different strategies one can adopt, depending on the speckle removal task one wishes to perform and on the availability of multitemporal stacks of SAR data. The first strategy applies a CNN model, trained to remove additive white Gaussian noise from natural images, within a recently proposed SAR speckle removal framework: MuLoG (MUlti-channel LOgarithm with Gaussian denoising). No training on SAR images is performed; the network is readily applied to speckle reduction tasks. The second strategy considers a novel approach to construct a reliable dataset of speckle-free SAR images necessary to train a CNN model. Finally, a hybrid approach is also analyzed: the CNN used to remove additive white Gaussian noise is trained on speckle-free SAR images. The proposed methods are compared to other state-of-the-art speckle removal filters to evaluate the quality of denoising and to discuss the pros and cons of the different strategies. Along with the paper, we make the weights of the trained network available to allow its use by other researchers.
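As a rough illustration of the first strategy, the sketch below applies a generic additive-Gaussian-noise denoiser to the log-intensity image. MuLoG itself is an iterative, statistically grounded framework, so this one-shot homomorphic version, with a Gaussian filter standing in for the trained CNN denoiser, is an assumption-laden simplification of the log-transform idea only.

```python
# Simplified homomorphic despeckling sketch: multiplicative speckle becomes
# (approximately) additive in the log domain, where any AWGN denoiser can be run.
import numpy as np
from scipy.ndimage import gaussian_filter

def despeckle_log_domain(intensity, gaussian_denoiser=lambda x: gaussian_filter(x, 2)):
    log_img = np.log(np.maximum(intensity, 1e-10))   # log-transform the intensity image
    log_denoised = gaussian_denoiser(log_img)        # stand-in for a CNN trained on AWGN
    return np.exp(log_denoised)                      # back to the intensity domain
```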


2021 ◽  
Vol 13 (20) ◽  
pp. 4021
Author(s):  
Lan Du ◽  
Lu Li ◽  
Yuchen Guo ◽  
Yan Wang ◽  
Ke Ren ◽  
...  

Radar target recognition methods usually use only a single type of high-resolution radar signal, e.g., high-resolution range profiles (HRRP) or synthetic aperture radar (SAR) images. In fact, during the SAR imaging procedure we can obtain both the HRRP data and the corresponding SAR image simultaneously. Although the information contained in the HRRP data and in the SAR image is not exactly the same, both are important for radar target recognition. Therefore, in this paper, we propose a novel end-to-end two-stream fusion network to make full use of the different characteristics obtained by modeling HRRP data and SAR images, respectively, for SAR target recognition. The proposed fusion network contains two separate streams in the feature extraction stage: one takes advantage of a variational auto-encoder (VAE) network to acquire the latent probabilistic distribution characteristic of the HRRP data, and the other uses a lightweight convolutional neural network, LightNet, to extract 2D visual structure characteristics from SAR images. Following the feature extraction stage, a fusion module integrates the latent probabilistic distribution characteristic and the structure characteristic to reflect the target information more comprehensively and sufficiently. The main contribution of the proposed method consists of two parts: (1) the different characteristics of the HRRP data and the SAR image can be used effectively for SAR target recognition, and (2) an attention weight vector is used in the fusion module to adaptively integrate the different characteristics from the two sub-networks. In experiments on the HRRP data and SAR images of the MSTAR and civilian vehicle datasets, our method improved the recognition rates by at least 0.96% and 2.16%, respectively, compared with current SAR target recognition methods.
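A minimal PyTorch sketch of an attention-weighted fusion module of this kind is given below. The feature dimensions, the softmax-gated two-way attention, and the single linear classifier are assumptions for illustration; the paper's VAE and LightNet feature extractors are not reproduced.

```python
# Hedged sketch of attention-weighted fusion of an HRRP feature vector and a
# SAR-image feature vector (layer sizes and gating form are assumed).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, hrrp_dim=64, sar_dim=64, n_classes=10):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(hrrp_dim + sar_dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(hrrp_dim + sar_dim, n_classes)

    def forward(self, z_hrrp, z_sar):
        w = self.attn(torch.cat([z_hrrp, z_sar], dim=-1))          # (batch, 2) attention weights
        fused = torch.cat([w[:, :1] * z_hrrp, w[:, 1:] * z_sar], dim=-1)
        return self.classifier(fused)

# Toy usage with random stand-ins for the VAE-latent and CNN features.
logits = AttentionFusion()(torch.randn(4, 64), torch.randn(4, 64))
print(logits.shape)   # torch.Size([4, 10])
```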


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Wei Wang ◽  
Chengwen Zhang ◽  
Jinge Tian ◽  
Jianping Ou ◽  
Ji Li

With the wide application of high-resolution radar, Radar Automatic Target Recognition (RATR) is increasingly focused on how to distinguish high-resolution radar targets quickly and accurately, and Synthetic Aperture Radar (SAR) image recognition has therefore become one of the research hotspots in this field. Based on the characteristics of SAR images, a Sparse Data Feature Extraction (SDFE) module is designed, and a new convolutional neural network, SSF-Net, is further proposed based on the SDFE module. Meanwhile, to improve processing efficiency, the network adopts three methods to classify targets: three Fully Connected (FC) layers, one Fully Connected (FC) layer, and Global Average Pooling (GAP). The latter two methods have fewer parameters and a lower computational cost, and thus better real-time performance. The methods were tested on the public datasets SAR-SOC and SAR-EOC-1. The experimental results show that SSF-Net is comparatively robust and achieves the highest recognition accuracy of 99.55% and 99.50% on SAR-SOC and SAR-EOC-1, respectively, which is 1% higher than the comparison methods on SAR-EOC-1.
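The three classification heads mentioned above can be contrasted with a short PyTorch sketch. The channel count, feature-map size, and layer widths are placeholders; SSF-Net's actual feature extractor is not reproduced here.

```python
# Minimal comparison of the three classification heads: 3 FC layers, 1 FC layer, GAP.
# C, H, W and the hidden widths are assumed placeholder values.
import torch
import torch.nn as nn

C, H, W, N_CLASSES = 128, 8, 8, 10          # assumed feature-map size after the conv backbone

head_fc3 = nn.Sequential(nn.Flatten(),
                         nn.Linear(C * H * W, 512), nn.ReLU(),
                         nn.Linear(512, 128), nn.ReLU(),
                         nn.Linear(128, N_CLASSES))
head_fc1 = nn.Sequential(nn.Flatten(), nn.Linear(C * H * W, N_CLASSES))
head_gap = nn.Sequential(nn.Conv2d(C, N_CLASSES, kernel_size=1),   # map channels to classes
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())    # average each class map

x = torch.randn(2, C, H, W)
for name, head in [("3xFC", head_fc3), ("1xFC", head_fc1), ("GAP", head_gap)]:
    n_params = sum(p.numel() for p in head.parameters())
    print(name, head(x).shape, f"{n_params:,} parameters")   # GAP head is by far the smallest
```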


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1643
Author(s):  
Ming Liu ◽  
Shichao Chen ◽  
Fugang Lu ◽  
Mengdao Xing ◽  
Jingbiao Wei

For target detection in complex scenes of synthetic aperture radar (SAR) images, false alarms in land areas are hard to eliminate, especially those near the coastline. Focusing on this problem, an algorithm based on the fusion of multiscale superpixel segmentations is proposed in this paper. Firstly, the SAR images are partitioned using superpixel segmentation at different scales. For the superpixels at each scale, land-sea segmentation is achieved by judging their statistical properties. Then, the land-sea segmentation results obtained at each scale are combined with the result of a constant false alarm rate (CFAR) detector to eliminate the false alarms located in the land areas of the SAR image. Finally, to enhance robustness, the detection results obtained at the different scales are fused to produce the final detections. Experimental results on real SAR images verify the effectiveness of the proposed algorithm.
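The following sketch illustrates the masking-and-fusion idea with a deliberately simple cell-averaging CFAR and a majority vote over scales; the actual CFAR statistic, the superpixel-based sea/land test, and the fusion rule in the paper are not reproduced and the forms used here are assumptions.

```python
# Hedged sketch: mask a CFAR detection map with land-sea segmentations obtained at
# several scales, then fuse the masked maps by majority voting.
import numpy as np
from scipy.ndimage import uniform_filter

def cell_averaging_cfar(img, win=21, factor=3.0):
    """Very simple cell-averaging CFAR: detect pixels above factor * local clutter mean."""
    return img > factor * uniform_filter(img.astype(float), size=win)

def fuse_multiscale(cfar_map, sea_masks):
    """sea_masks: list of boolean maps (True = sea), one per superpixel scale."""
    votes = np.sum([cfar_map & m for m in sea_masks], axis=0)   # detections kept by each scale
    return votes >= (len(sea_masks) + 1) // 2                   # keep those most scales agree on
```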


2021 ◽  
Vol 147 ◽  
pp. 115-123
Author(s):  
Yinyin Jiang ◽  
Ming Li ◽  
Peng Zhang ◽  
Xiaofeng Tan ◽  
Wanying Song
