Robust Automatic Target Recognition Algorithm for Large-Scene SAR Images and Its Adaptability Analysis on Speckle

2016 · Vol 2016 · pp. 1-11
Author(s): Hongqiao Wang, Yanning Cai, Guangyuan Fu, Shicheng Wang

To address multiple-target recognition in large-scene SAR images with strong speckle, this paper studies a robust full-process method covering target detection, feature extraction, and target recognition. By introducing a simple 8-neighborhood orthogonal basis, a local multiscale decomposition method centered on the target's center of gravity is presented. With this method, an image can be processed by a multilevel sampling filter, and the target's multiscale features in eight directions, together with one low-frequency filtering feature, can be derived directly by sampling key pixels. In addition, a recognition algorithm that integrates the local multiscale features with a multiscale wavelet kernel classifier is studied, achieving fast, robust, and accurate classification of multiclass image targets. The classification results and the adaptability analysis on speckle show that the algorithm is effective not only on MSTAR (Moving and Stationary Target Acquisition and Recognition) target chips but also for automatic multiclass/multitarget recognition in large-scene SAR images with strong speckle; the method is also robust to target rotation and scale transformation.
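
As a loose illustration of the sampling idea described above (not the paper's exact decomposition), the sketch below computes the intensity-weighted center of gravity of a target chip and samples key pixels along the eight neighborhood directions at dyadic distances, appending one low-frequency (local mean) feature. The function names, the number of scales, and the use of the chip mean as the low-frequency feature are assumptions.

```python
import numpy as np

# 8-neighborhood direction offsets (hypothetical stand-in for the orthogonal basis).
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def center_of_gravity(chip):
    """Intensity-weighted center of gravity (row, col) of a SAR target chip."""
    rows, cols = np.indices(chip.shape)
    total = chip.sum() + 1e-12
    return (rows * chip).sum() / total, (cols * chip).sum() / total

def local_multiscale_features(chip, levels=4):
    """Sample key pixels along 8 directions at dyadic distances from the
    center of gravity, then append one low-frequency (local mean) feature."""
    r0, c0 = center_of_gravity(chip)
    h, w = chip.shape
    feats = []
    for dr, dc in DIRECTIONS:
        for level in range(1, levels + 1):
            step = 2 ** level                      # dyadic sampling distance
            r = int(np.clip(round(r0 + dr * step), 0, h - 1))
            c = int(np.clip(round(c0 + dc * step), 0, w - 1))
            feats.append(chip[r, c])
    feats.append(chip.mean())                      # low-frequency feature
    return np.asarray(feats, dtype=np.float32)

# Example on a random stand-in chip: 8 directions * 4 levels + 1 = 33 features
chip = np.abs(np.random.randn(64, 64)).astype(np.float32)
print(local_multiscale_features(chip).shape)       # (33,)
```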

2021 · Vol 13 (17) · pp. 3493
Author(s): Jifang Pei, Zhiyong Wang, Xueping Sun, Weibo Huo, Yin Zhang, ...

Synthetic aperture radar (SAR) is an advanced microwave imaging system of great importance. Recognizing real-world targets from SAR images, i.e., automatic target recognition (ATR), is an attractive but challenging problem. The majority of existing SAR ATR methods are designed for single-view SAR images. However, multiview SAR images contain more abundant classification information than single-view images, which benefits automatic target classification and recognition. This paper proposes an end-to-end deep feature extraction and fusion network (FEF-Net) that can effectively exploit the recognition information in multiview SAR images and boost target recognition performance. The proposed FEF-Net is based on a multiple-input network structure with several specialized learning modules, such as deformable convolution and squeeze-and-excitation (SE). With these modules, multiview recognition information can be effectively extracted and fused, yielding excellent multiview SAR target recognition performance. The superiority of the proposed FEF-Net was validated by experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset.
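
Without reproducing FEF-Net itself, the following PyTorch sketch shows the general multiple-input pattern the abstract describes: a shared convolutional branch extracts features from each view, and the per-view features are concatenated before classification. The layer sizes are illustrative assumptions, and the paper's deformable-convolution and SE modules are omitted (plain convolutions stand in for them).

```python
import torch
import torch.nn as nn

class MultiViewFusionNet(nn.Module):
    """Toy multiple-input network: a shared convolutional branch per view,
    concatenation fusion, and a linear classifier."""
    def __init__(self, num_views=3, num_classes=10):
        super().__init__()
        self.branch = nn.Sequential(                 # plain convs stand in for the
            nn.Conv2d(1, 16, 3, padding=1),          # paper's deformable/SE modules
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1),
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(32 * num_views, num_classes)

    def forward(self, views):                        # views: list of (B, 1, H, W) tensors
        fused = torch.cat([self.branch(v) for v in views], dim=1)  # feature fusion
        return self.classifier(fused)

# Example: three 64x64 views of the same target, batch of 2
net = MultiViewFusionNet(num_views=3)
print(net([torch.randn(2, 1, 64, 64) for _ in range(3)]).shape)   # torch.Size([2, 10])
```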


2021 · Vol 30 (13)
Author(s): Zhichao Liu, Baida Qu

For the problem of target recognition in synthetic aperture radar (SAR) images, a method combining bidimensional empirical mode decomposition (BEMD) and the extreme learning machine (ELM) is proposed. BEMD performs feature extraction on SAR images, producing multi-layer bidimensional intrinsic mode functions (BIMFs). These BIMFs convey the discriminative information of the original target while effectively suppressing noise. An ELM classifies each BIMF with high efficiency and robustness. Finally, the decisions from the different BIMFs are fused with a linear weighting strategy to reach a reliable decision on the target label. The proposed method compensates for the relatively low robustness of the ELM to noise corruption through BEMD feature extraction, and the multi-layer BIMFs provide more discriminative information for a correct decision, so the overall recognition performance can be improved. As an efficient recognition algorithm, the proposed method can be used in embedded systems for a wide range of applications. Experiments are designed and implemented on the moving and stationary target acquisition and recognition (MSTAR) dataset, and the proposed method is tested under both the standard operating condition (SOC) and extended operating conditions (EOCs). The results reflect its effectiveness and robustness via quantitative comparisons.
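
For readers unfamiliar with ELM, the sketch below shows a minimal extreme learning machine (random hidden layer, analytically solved output weights) together with a linear weighted fusion of per-BIMF decisions; the BEMD step, feature dimensions, and fusion weights are stand-in assumptions rather than the paper's settings.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y, n_classes):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)             # random hidden-layer activations
        T = np.eye(n_classes)[y]                     # one-hot targets
        self.beta = np.linalg.pinv(H) @ T            # analytic output weights
        return self

    def decision(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def fuse_decisions(decisions, weights):
    """Linear weighted fusion of per-BIMF decision matrices, then argmax label."""
    fused = sum(w * d for w, d in zip(weights, decisions))
    return fused.argmax(axis=1)

# Example with random stand-ins for features from two BIMF layers of the same chips
rng = np.random.default_rng(1)
X1, X2 = rng.normal(size=(100, 64)), rng.normal(size=(100, 64))
y = rng.integers(0, 3, size=100)
clf1 = ELMClassifier().fit(X1, y, n_classes=3)
clf2 = ELMClassifier(seed=1).fit(X2, y, n_classes=3)
print(fuse_decisions([clf1.decision(X1), clf2.decision(X2)], [0.6, 0.4])[:10])
```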


Author(s): Yongpeng Tao, Yu Jing, Cong Xu

Background: A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on the joint classification of the target region and shadow is proposed in this paper. Methods: Elliptical Fourier descriptors (EFDs) are used to describe the target region and the shadow extracted from the original SAR image. In addition, the relative positions of the target region and shadow are represented by a constructed feature vector. The three feature vectors complement each other to provide a more comprehensive description of the target's physical properties, e.g., size and shape. In the classification stage, the three feature vectors are jointly classified based on joint sparse representation (JSR). JSR is a multi-task learning algorithm that not only represents each component properly but also exploits the inner correlations among the different components. Finally, the target type is determined as the class with the minimum reconstruction error. Results: Experiments were conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. The proposed method achieves a high recognition accuracy of 96.86% for the 10-class recognition problem under the standard operating condition (SOC). Moreover, the robustness of the proposed method is superior to that of the reference methods under extended operating conditions (EOCs) such as configuration variants, depression angle variation, and noise corruption. Conclusion: The effectiveness and robustness of the proposed method are therefore quantitatively demonstrated by the experimental results.
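
The decision rule can be illustrated roughly as follows: each class keeps a training dictionary per component (target-region EFDs, shadow EFDs, relative-position vector), and the test sample is assigned to the class with the smallest total reconstruction error. The joint sparse coding step is replaced here by ordinary least squares for brevity, so this is only a sketch of the minimum-reconstruction-error rule, not of JSR itself; all names and dimensions are illustrative.

```python
import numpy as np

def min_reconstruction_error_class(test_feats, class_dicts):
    """Assign the class whose per-component dictionaries best reconstruct the sample.

    test_feats:  {component: feature vector} for the test sample.
    class_dicts: {class_label: {component: matrix with training samples as columns}}.
    """
    errors = {}
    for label, dicts in class_dicts.items():
        total = 0.0
        for comp, y in test_feats.items():
            D = dicts[comp]                                 # training samples as columns
            alpha, *_ = np.linalg.lstsq(D, y, rcond=None)   # least-squares stand-in for sparse coding
            total += np.linalg.norm(y - D @ alpha)          # component reconstruction error
        errors[label] = total
    return min(errors, key=errors.get)

# Tiny example with random stand-in features for two MSTAR classes
rng = np.random.default_rng(0)
dicts = {c: {"region": rng.normal(size=(40, 20)),
             "shadow": rng.normal(size=(40, 20)),
             "position": rng.normal(size=(4, 20))} for c in ("BMP2", "T72")}
test = {"region": rng.normal(size=40), "shadow": rng.normal(size=40),
        "position": rng.normal(size=4)}
print(min_reconstruction_error_class(test, dicts))
```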


2019 · Vol 11 (11) · pp. 1316
Author(s): Li Wang, Xueru Bai, Feng Zhou

In recent studies, synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms based on convolutional neural networks (CNNs) have achieved high recognition rates on the moving and stationary target acquisition and recognition (MSTAR) dataset. However, in a SAR ATR task, feature maps that carry little information, although learned automatically by the CNN, can disturb the classifier. We design a new enhanced squeeze-and-excitation (enhanced-SE) module to solve this problem and then propose a new SAR ATR network, the enhanced squeeze-and-excitation network (ESENet). Compared with existing CNN structures designed for SAR ATR, ESENet can extract more effective features from SAR images and obtain better generalization performance. On the MSTAR dataset containing pure targets, the proposed method achieves a recognition rate of 97.32%, exceeding existing CNN-based SAR ATR algorithms. It also shows robustness to large depression angle variation, configuration variants, and version variants.
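
The paper's enhanced-SE module is not reproduced here; the PyTorch sketch below shows the base squeeze-and-excitation mechanism it builds on, in which globally pooled channel statistics are turned into per-channel weights that can suppress feature maps carrying little information. The reduction ratio and tensor sizes are assumptions.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel recalibration (base mechanism only;
    the paper's 'enhanced' variant is not reproduced)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W) feature maps
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze: global average pooling per channel
        return x * w[:, :, None, None]          # excite: down-weight uninformative channels

# Example: recalibrate 32 feature maps for a batch of SAR chips
x = torch.randn(4, 32, 16, 16)
print(SEBlock(32)(x).shape)                     # torch.Size([4, 32, 16, 16])
```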


2021 · Vol 13 (24) · pp. 5121
Author(s): Yu Zhou, Yi Li, Weitong Xie, Lu Li

It is very common to apply convolutional neural networks (CNNs) to synthetic aperture radar (SAR) automatic target recognition (ATR). However, most SAR ATR methods using CNNs rely mainly on the image features of SAR images and make little use of their unique electromagnetic scattering characteristics. For SAR images, attributed scattering centers (ASCs) reflect the electromagnetic scattering characteristics and the local structures of the target, which are useful for SAR ATR. Therefore, we propose a network that comprehensively uses both the image features and the ASC-related features to improve SAR ATR performance. The proposed network has two branches: one extracts discriminative image features from the input SAR image; the other extracts physically meaningful features from the ASC schematic map, which reflects the local structure of the target corresponding to each ASC. Finally, the high-level features obtained by the two branches are fused to recognize the target. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset demonstrate the capability of the SAR ATR method proposed in this letter.
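
As a small illustration of the second branch's input, and under the assumption that ASC positions and amplitudes have already been estimated by a parameter-estimation step not shown here, the sketch below rasterizes a list of attributed scattering centers into a schematic map that a CNN branch could consume. The Gaussian-blob rendering, map size, and parameter format are illustrative choices, not the paper's construction.

```python
import numpy as np

def asc_schematic_map(asc_params, shape=(64, 64), sigma=1.5):
    """Rasterize attributed scattering centers onto an image grid.

    asc_params: iterable of (row, col, amplitude) tuples estimated beforehand,
                with positions in pixel coordinates.
    Returns a map where each ASC contributes a Gaussian blob scaled by its amplitude.
    """
    rows, cols = np.indices(shape)
    out = np.zeros(shape, dtype=np.float32)
    for r, c, amp in asc_params:
        out += amp * np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return out / (out.max() + 1e-12)                 # normalize for the CNN branch

# Example: three hypothetical scattering centers of a vehicle target
print(asc_schematic_map([(20, 30, 1.0), (34, 32, 0.8), (40, 25, 0.5)]).shape)  # (64, 64)
```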


2021 · Vol 2021 · pp. 1-11
Author(s): Xiang Chen, Xing Wang, You Chen, Haihan Wang

Synthetic aperture radar (SAR) image target recognition technology aims to automatically determine the presence or absence of target information in an input SAR image and to improve the efficiency and accuracy of SAR image interpretation. Based on big data analysis, dirty data are removed, clean data are retained, and standardized processing of the SAR image data is realized. At the same time, by establishing a statistical model of coherent speckle, a convolutional autoencoder is used to denoise the SAR images. Finally, a network model modified with a softmax cross-entropy loss and a Fisher loss is used for automatic target recognition. Based on the MSTAR dataset, two scene images containing targets, synthesized from background images and target slices, are used for the experiments. Several comparative experiments verify the effectiveness of the classification and recognition model in this paper.
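
One common way to combine a softmax cross-entropy loss with a Fisher-style discriminative term is to add a penalty on within-class feature scatter; the paper's exact Fisher loss may be formulated differently, so the PyTorch sketch below, including the weighting factor, is only an assumption-laden illustration of the combined objective.

```python
import torch
import torch.nn.functional as F

def fisher_style_loss(features, labels, num_classes):
    """Penalize within-class scatter of the feature vectors (a common center-based
    formulation; the paper's exact Fisher loss may differ)."""
    loss = features.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() < 2:
            continue
        center = features[mask].mean(dim=0)
        loss = loss + ((features[mask] - center) ** 2).sum(dim=1).mean()
    return loss / num_classes

def total_loss(logits, features, labels, num_classes, lam=0.01):
    """Softmax cross-entropy plus a weighted Fisher-style term."""
    return F.cross_entropy(logits, labels) + lam * fisher_style_loss(
        features, labels, num_classes)

# Example with random stand-ins for the network's logits and penultimate features
logits, feats = torch.randn(8, 10), torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
print(total_loss(logits, feats, labels, num_classes=10))
```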


2021 · Vol 13 (9) · pp. 1772
Author(s): Zhenpeng Feng, Mingzhe Zhu, Ljubiša Stanković, Hongbing Ji

Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR image processing. Generally, SAR image interpretation comprises complex procedures including filtering, feature extraction, image segmentation, and target recognition, which greatly reduce the efficiency of data processing. In the era of deep learning, numerous automatic target recognition methods have been proposed based on convolutional neural networks (CNNs) owing to their strong capabilities for data abstraction and mining. In contrast to conventional methods, CNNs have an end-to-end structure in which complex data preprocessing is not needed, so efficiency can be improved dramatically once a CNN is well trained. However, the recognition mechanism of a CNN is unclear, which hinders its application in many scenarios. In this paper, Self-Matching class activation mapping (CAM) is proposed to visualize what a CNN learns from SAR images to make a decision. Self-Matching CAM assigns a pixel-wise weight matrix to the feature maps of different channels by matching them with the input SAR image. With Self-Matching CAM, the detailed information of the target can be well preserved in an accurate visual explanation heatmap of the CNN for SAR image interpretation. Numerous experiments on a benchmark dataset (MSTAR) verify the validity of Self-Matching CAM.
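
The exact Self-Matching CAM algorithm is not reproduced here; as a rough, assumption-laden sketch of the matching idea only, the code below upsamples each feature map to the input size, weights it by its normalized correlation with the input SAR image, and accumulates a heatmap. The function name and the correlation-based weighting are illustrative, not the paper's definition.

```python
import numpy as np
from scipy.ndimage import zoom

def matching_heatmap(feature_maps, sar_image):
    """Rough CAM-style visualization: weight each upsampled channel by its
    normalized correlation with the input SAR image (illustration only).

    feature_maps: array of shape (C, h, w) from a convolutional layer.
    sar_image:    array of shape (H, W).
    """
    H, W = sar_image.shape
    img = (sar_image - sar_image.mean()) / (sar_image.std() + 1e-12)
    heatmap = np.zeros((H, W), dtype=np.float32)
    for fmap in feature_maps:
        up = zoom(fmap, (H / fmap.shape[0], W / fmap.shape[1]), order=1)  # upsample to input size
        up_n = (up - up.mean()) / (up.std() + 1e-12)
        weight = max((up_n * img).mean(), 0.0)       # correlation-based channel weight
        heatmap += weight * up
    heatmap -= heatmap.min()
    return heatmap / (heatmap.max() + 1e-12)         # normalize to [0, 1]

# Example with random stand-ins for feature maps and a SAR chip
print(matching_heatmap(np.random.rand(8, 16, 16), np.random.rand(64, 64)).shape)  # (64, 64)
```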

