SAR ATR of Ground Vehicles Based on ESENet

2019 ◽  
Vol 11 (11) ◽  
pp. 1316 ◽  
Author(s):  
Li Wang ◽  
Xueru Bai ◽  
Feng Zhou

In recent studies, synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms based on convolutional neural networks (CNNs) have achieved high recognition rates on the moving and stationary target acquisition and recognition (MSTAR) dataset. However, in a SAR ATR task, feature maps carrying little information, which the CNN learns automatically, can disturb the classifier. To solve this problem, we design a new enhanced squeeze-and-excitation (enhanced-SE) module and propose a new SAR ATR network, the enhanced squeeze-and-excitation network (ESENet). Compared to existing CNN structures designed for SAR ATR, ESENet extracts more effective features from SAR images and obtains better generalization performance. On the MSTAR dataset containing pure targets, the proposed method achieves a recognition rate of 97.32%, exceeding existing CNN-based SAR ATR algorithms. It has also shown robustness to large depression angle variation, configuration variants, and version variants.
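The enhanced-SE module itself is not specified in the abstract, but the standard squeeze-and-excitation operation it builds on can be sketched in a few lines: pool each channel to a scalar, pass the result through a small bottleneck, and rescale the channels. A minimal numpy sketch, with illustrative weight shapes chosen here rather than taken from the paper:

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """Channel reweighting in the spirit of a squeeze-and-excitation block:
    squeeze each channel to a scalar by global average pooling, pass the
    result through a two-layer bottleneck, and rescale the channels.
    feature_maps: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeezed = feature_maps.mean(axis=(1, 2))      # (C,) global average pool
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gate in (0, 1)
    return feature_maps * gate[:, None, None]      # reweighted channels
```

With zero weights the gate is sigmoid(0) = 0.5 for every channel, which makes the scaling behaviour easy to check by hand.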

2016 ◽  
Vol 2016 ◽  
pp. 1-11 ◽  
Author(s):  
Hongqiao Wang ◽  
Yanning Cai ◽  
Guangyuan Fu ◽  
Shicheng Wang

Aiming at multiple-target recognition in large-scene SAR images with strong speckle, this paper studies a robust full-process method covering target detection, feature extraction, and target recognition. By introducing a simple 8-neighborhood orthogonal basis, a local multiscale decomposition method starting from the target's center of gravity is presented. With this method, an image can be processed by a multilevel sampling filter, and the target's multiscale features in eight directions, together with one low-frequency filtering feature, can be derived directly by sampling the key pixels. In addition, a recognition algorithm that organically integrates the local multiscale features with a multiscale wavelet kernel classifier is studied, realizing quick, robust, and highly accurate classification of multiclass image targets. The results of classification and of an adaptability analysis on speckle show that the algorithm is effective not only on the MSTAR (Moving and Stationary Target Acquisition and Recognition) target chips but also for automatic recognition of multiple targets of multiple classes in large-scene SAR images with strong speckle; meanwhile, the method is robust to rotation and scale transformations of the target.
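The 8-direction sampling around the target's center of gravity can be pictured with a small sketch. This is one plausible reading of the idea, not the paper's algorithm: the dyadic offsets, the intensity-weighted centroid, and the border clamping are all illustrative choices:

```python
import numpy as np

def local_multiscale_features(img, levels=3):
    """Sample key pixels in the eight compass directions around the image's
    centre of gravity at dyadic offsets (1, 2, 4, ... pixels), one value per
    direction and level, plus the centre pixel as a low-frequency term."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = img.sum()
    cy = int(round((ys * img).sum() / total))   # intensity-weighted centre row
    cx = int(round((xs * img).sum() / total))   # intensity-weighted centre col
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]
    feats = [img[cy, cx]]                       # low-frequency (centre) term
    for level in range(levels):
        step = 2 ** level
        for dy, dx in dirs:
            y = min(max(cy + dy * step, 0), h - 1)  # clamp at image border
            x = min(max(cx + dx * step, 0), w - 1)
            feats.append(img[y, x])
    return np.array(feats)
```

For three levels this yields a fixed-length vector of 1 + 3 × 8 = 25 samples regardless of image size, which is what makes the features directly comparable across targets.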


2017 ◽  
Vol 2017 ◽  
pp. 1-18 ◽  
Author(s):  
Xiaohui Zhao ◽  
Yicheng Jiang ◽  
Tania Stathaki

A strategy is introduced for achieving high accuracy in synthetic aperture radar (SAR) automatic target recognition (ATR) tasks. First, a novel pose rectification process and an image normalization process are applied sequentially to produce images with fewer variations prior to the feature processing stage. Then, feature sets rich in texture and edge information are extracted using wavelet coefficients, and more effective and compact feature sets are acquired by reducing the redundancy and dimensionality of the extracted features. Finally, a group of discrimination trees is learned and combined into a final classifier in the Real-AdaBoost framework. The proposed method is evaluated on the public release of the moving and stationary target acquisition and recognition (MSTAR) database. Several comparative studies are conducted to evaluate the effectiveness of the proposed algorithm. Experimental results show the distinct superiority of the proposed method under both standard operating conditions (SOCs) and extended operating conditions (EOCs). Moreover, additional tests suggest that good recognition accuracy can be achieved even with a limited number of training images, as long as they are captured with an appropriately incremental sampling step in target pose.
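The wavelet-coefficient features mentioned above start from a standard decomposition. The abstract does not say which wavelet is used, so as an assumed simplest case, a single level of the 2-D Haar transform can be written directly in numpy:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform: returns the approximation
    band (LL) and three detail bands (LH, HL, HH), each half the input size
    per side. Image sides must be even."""
    a = img[0::2, :] + img[1::2, :]          # vertical pair sums
    d = img[0::2, :] - img[1::2, :]          # vertical pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0     # smooth in both directions
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0     # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0     # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0     # diagonal detail
    return ll, lh, hl, hh
```

With this normalization LL holds 2 × 2 block means, so a constant image reproduces itself in LL and produces zero detail bands.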


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Chenyu Li ◽  
Guohua Liu

This paper applies block sparse Bayesian learning (BSBL) to synthetic aperture radar (SAR) target recognition. Traditional sparse representation-based classification (SRC) operates on a global dictionary assembled from all classes, and the similarities between the test sample and the various classes are then evaluated by the reconstruction errors. This paper instead reconstructs the test sample over local dictionaries formed by the individual classes. Considering the azimuthal sensitivity of SAR images, the linear coefficients over a local dictionary are sparse with a block structure, so BSBL is employed to solve for them. The proposed method better exploits the representation capability of each class, which benefits recognition performance. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) dataset confirm the effectiveness and robustness of the proposed method.
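The decision rule, reconstructing the test sample over each class's local dictionary and picking the smallest residual, can be sketched independently of the solver. In this sketch plain least squares stands in for BSBL, so the block-sparsity structure is not modeled; only the minimum-reconstruction-error rule over per-class dictionaries is the same:

```python
import numpy as np

def classify_by_reconstruction(test_sample, class_dicts):
    """Reconstruct the test sample over each class's local dictionary and
    return the class index with the smallest residual norm, plus all
    residuals. Each dictionary D has shape (dim, n_atoms)."""
    errors = []
    for D in class_dicts:
        coef, *_ = np.linalg.lstsq(D, test_sample, rcond=None)
        errors.append(np.linalg.norm(test_sample - D @ coef))
    return int(np.argmin(errors)), errors
```

Swapping the least-squares step for a block-sparse solver changes only how `coef` is obtained; the local-dictionary decision rule is unchanged.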


2021 ◽  
Vol 13 (17) ◽  
pp. 3493
Author(s):  
Jifang Pei ◽  
Zhiyong Wang ◽  
Xueping Sun ◽  
Weibo Huo ◽  
Yin Zhang ◽  
...  

Synthetic aperture radar (SAR) is an advanced microwave imaging system of great importance. The recognition of real-world targets from SAR images, i.e., automatic target recognition (ATR), is an attractive but challenging problem. The majority of existing SAR ATR methods are designed for single-view SAR images. However, multiview SAR images contain more abundant classification information than single-view images, which benefits automatic target classification and recognition. This paper proposes an end-to-end deep feature extraction and fusion network (FEF-Net) that can effectively exploit recognition information from multiview SAR images and boost target recognition performance. The proposed FEF-Net is based on a multiple-input network structure with distinct and useful learning modules, such as deformable convolution and squeeze-and-excitation (SE). Multiview recognition information can be effectively extracted and fused with these modules, so excellent multiview SAR target recognition performance can be achieved. The superiority of FEF-Net was validated in experiments with the moving and stationary target acquisition and recognition (MSTAR) dataset.
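The extract-then-fuse structure of a multiview network can be illustrated at its simplest: per-view feature vectors concatenated and passed through a learned projection. This omits the deformable convolution and SE modules that do the real work in FEF-Net, and the projection matrix `w` is a hypothetical learned parameter, not from the paper:

```python
import numpy as np

def fuse_multiview(view_features, w):
    """Fuse per-view feature vectors by concatenation followed by a linear
    projection with ReLU, one simple way to merge multiview information.
    view_features: list of (d,) arrays; w: (k, n_views * d) projection."""
    stacked = np.concatenate(view_features)   # join all views into one vector
    return np.maximum(w @ stacked, 0.0)       # fused representation
```

The key property preserved from the multiview setting is that every view contributes to every fused feature, rather than each view being classified separately.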


2021 ◽  
Vol 13 (19) ◽  
pp. 3864
Author(s):  
Changjie Cao ◽  
Zongyong Cui ◽  
Zongjie Cao ◽  
Liying Wang ◽  
Jianyu Yang

Although automatic target recognition (ATR) models based on data-driven algorithms have achieved excellent performance in recent years, SAR ATR models often suffer performance degradation when they encounter a small sample set. In this paper, an integrated counterfactual sample generation and filtering approach is proposed to alleviate the negative influence of a small sample set. The proposed method consists of a generation component and a filtering component. First, the generation component exploits the overfitting characteristics of generative adversarial networks (GANs), which ensures the generation of counterfactual target samples. Second, the filtering component is built by learning different recognition functions: multiple SVMs trained on different SAR target sample sets provide pseudo-labels to one another to improve the recognition rate. The approach thus improves the performance of the recognition model dynamically while continuously generating counterfactual target samples; at the same time, the counterfactual target samples that benefit the ATR model are selected by the filter. Ablation experiments demonstrate the effectiveness of the various components of the proposed method, and experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) and OpenSARship datasets show its advantages. Even though the size of the constructed training set was 14.5% of the original training set, the recognition performance of the ATR model reached 91.27% with the proposed approach.
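The cross-labelling filter can be approximated by a consensus rule: several classifiers trained on different sample sets label each generated sample, and only samples on which they all agree are kept. In this sketch, nearest-centroid classifiers stand in for the paper's SVMs:

```python
import numpy as np

def centroid_classifier(X, y, n_classes):
    """Nearest-centroid classifier (a stand-in for an SVM): returns a
    function mapping a batch of queries to predicted class indices."""
    cents = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    return lambda Q: np.argmin(
        ((Q[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2), axis=1)

def filter_generated(generated, classifiers):
    """Keep a generated sample only when every classifier assigns it the
    same pseudo-label; return the kept samples and their labels."""
    preds = np.stack([clf(generated) for clf in classifiers])  # (n_clf, n)
    agree = (preds == preds[0]).all(axis=0)                    # unanimous?
    return generated[agree], preds[0][agree]
```

Samples near a decision boundary, where differently trained classifiers disagree, are exactly the ones this rule discards.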


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3218
Author(s):  
Mohamed Touafria ◽  
Qiang Yang

This article discusses Automatic Target Recognition (ATR) on Synthetic Aperture Radar (SAR) images. By learning a hierarchy of features automatically from massive amounts of training data, learning networks such as Convolutional Neural Networks (CNNs) have recently achieved state-of-the-art results on many tasks. To extract better features of SAR targets and obtain better accuracy, a new framework is proposed. First, three CNN models based on different convolution and pooling kernel sizes are proposed. Second, they are applied simultaneously to the SAR images to generate image features by extracting CNN features from different layers in two scenarios. In the first scenario, the activation vectors obtained from the fully connected layers are taken as the final image features; in the second scenario, dense features are extracted from the last convolutional layer and then encoded into global image features through a commonly used feature coding approach, Fisher Vectors (FVs). Finally, different combination and fusion approaches between the two sets of experiments are considered to construct the final representation of the SAR images for classification. Extensive experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset demonstrate the capability of the proposed method compared to several state-of-the-art methods.
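Fisher Vector encoding aggregates local descriptors against a pre-trained GMM. A reduced form keeping only the gradient with respect to the component means (the full FV also carries weight and variance terms) might look like:

```python
import numpy as np

def fisher_vector_mu(X, weights, means, sigmas):
    """Fisher vector restricted to the gradient w.r.t. the GMM means.
    X: (N, D) local descriptors; weights: (K,) mixture weights;
    means, sigmas: (K, D) per-component means and standard deviations."""
    N = X.shape[0]
    z = (X[:, None, :] - means[None, :, :]) / sigmas[None, :, :]  # (N, K, D)
    # log of each component's weighted density, up to a shared constant
    log_p = (np.log(weights)[None, :] - 0.5 * (z ** 2).sum(axis=2)
             - np.log(sigmas).sum(axis=1)[None, :])
    log_p -= log_p.max(axis=1, keepdims=True)      # stabilise the softmax
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)      # posteriors (N, K)
    fv = (gamma[:, :, None] * z).sum(axis=0) / (N * np.sqrt(weights)[:, None])
    return fv.ravel()                              # (K * D,) encoding
```

A useful sanity check: descriptors symmetric about a component's mean contribute cancelling gradients, so the encoding is zero.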


2021 ◽  
Vol 13 (24) ◽  
pp. 5121
Author(s):  
Yu Zhou ◽  
Yi Li ◽  
Weitong Xie ◽  
Lu Li

It is very common to apply convolutional neural networks (CNNs) to synthetic aperture radar (SAR) automatic target recognition (ATR). However, most SAR ATR methods using CNNs rely mainly on the image features of SAR images and make little use of their unique electromagnetic scattering characteristics. In SAR images, attributed scattering centers (ASCs) reflect the electromagnetic scattering characteristics and the local structures of the target, which are useful for SAR ATR. We therefore propose a network that comprehensively uses both the image features and the ASC-related features to improve SAR ATR performance. The proposed network has two branches: one extracts discriminative image features from the input SAR image; the other extracts physically meaningful features from the ASC schematic map, which reflects the local structure of the target corresponding to each ASC. Finally, the high-level features obtained by the two branches are fused to recognize the target. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset prove the capability of the SAR ATR method proposed in this letter.
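Feature-level fusion inside a network is hard to show compactly, but the weaker score-level variant, a convex combination of the two branches' class posteriors, conveys the idea. The `alpha` mixing weight is a hypothetical parameter, not from the paper, which fuses high-level features rather than scores:

```python
import numpy as np

def fuse_branch_scores(image_scores, asc_scores, alpha=0.5):
    """Late fusion of two branches' raw class scores: convert each to a
    probability vector with softmax, then mix with weight alpha."""
    def softmax(s):
        e = np.exp(s - s.max())        # subtract max for numerical stability
        return e / e.sum()
    fused = alpha * softmax(image_scores) + (1 - alpha) * softmax(asc_scores)
    return int(np.argmax(fused)), fused
```

Either branch can override a weakly confident vote from the other, which is the behaviour a complementary ASC branch is meant to provide.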


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3019 ◽  
Author(s):  
Jian Tan ◽  
Xiangtao Fan ◽  
Shenghua Wang ◽  
Yingchao Ren

A target recognition method for synthetic aperture radar (SAR) images is proposed based on matching attributed scattering centers (ASCs) to binary target regions. The ASCs extracted from the test image are predicted as binary regions. In detail, each ASC is first transformed to the image domain based on the ASC model; the resulting image is then converted to a binary region by segmentation with a global threshold. All the predicted binary regions of individual ASCs from the test sample are matched to the binary target regions of the corresponding templates. The matched regions are then evaluated by three scores, which are combined into a similarity measure via score-level fusion. In the classification stage, the target label of the test sample is determined according to the fused similarities. The proposed region matching method avoids the conventional ASC matching problem, which involves the assignment of ASC sets; in addition, the predicted regions are more robust than point features. The Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset is used for performance evaluation in the experiments. According to the experimental results, the method in this study outperforms several traditional methods reported in the literature under different operating conditions. Under the standard operating condition (SOC), the proposed method achieves an average recognition rate of 98.34%, higher than that of the traditional methods. Moreover, the proposed method is also more robust than the traditional methods under various extended operating conditions (EOCs), including configuration variants, large depression angle variation, noise contamination, and partial occlusion.
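The three matching scores and their score-level fusion can be sketched for a single region pair. The particular scores below (recall, precision, Jaccard overlap) and the equal fusion weights are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def region_similarity(pred, templ, weights=(1/3, 1/3, 1/3)):
    """Score a predicted binary region against a template region with three
    overlap measures and fuse them by a weighted sum."""
    pred = pred.astype(bool)
    templ = templ.astype(bool)
    inter = np.logical_and(pred, templ).sum()
    union = np.logical_or(pred, templ).sum()
    recall = inter / templ.sum()       # how much of the template is covered
    precision = inter / pred.sum()     # how much of the prediction is valid
    jaccard = inter / union            # symmetric overlap
    scores = (recall, precision, jaccard)
    return float(np.dot(weights, scores)), scores
```

In the classification stage, the fused similarity would be computed against each class's template regions and the label of the best-scoring class returned.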


2021 ◽  
Vol 13 (9) ◽  
pp. 1772
Author(s):  
Zhenpeng Feng ◽  
Mingzhe Zhu ◽  
Ljubiša Stanković ◽  
Hongbing Ji

Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR image processing. Generally, SAR image interpretation comprises complex procedures including filtering, feature extraction, image segmentation, and target recognition, which greatly reduce the efficiency of data processing. In the era of deep learning, numerous automatic target recognition methods have been proposed based on convolutional neural networks (CNNs) due to their strong capabilities for data abstraction and mining. In contrast to general methods, CNNs have an end-to-end structure in which complex data preprocessing is not needed, so efficiency can improve dramatically once a CNN is well trained. However, the recognition mechanism of a CNN is unclear, which hinders its application in many scenarios. In this paper, Self-Matching class activation mapping (CAM) is proposed to visualize what a CNN learns from SAR images in order to make a decision. Self-Matching CAM assigns a pixel-wise weight matrix to feature maps of different channels by matching them with the input SAR image. With Self-Matching CAM, the detailed information of the target is well preserved in an accurate visual explanation heatmap of a CNN for SAR image interpretation. Numerous experiments on a benchmark dataset (MSTAR) verify the validity of Self-Matching CAM.
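A simplified version of the matching idea, using one scalar correlation weight per channel instead of the paper's pixel-wise weight matrix, can be sketched as follows; the integer-factor nearest-neighbour upsampling via `np.kron` is an implementation convenience, not part of the method:

```python
import numpy as np

def self_matching_cam(image, feature_maps):
    """CAM-style heatmap: weight each channel's upsampled feature map by its
    normalised correlation with the input image, sum the channels, and keep
    the positive part. feature_maps: (C, h, w); image: (H, W), H = h * s."""
    H, W = image.shape
    c, h, w = feature_maps.shape
    s = H // h
    up = np.kron(feature_maps, np.ones((1, s, s)))  # nearest-neighbour upsample
    g = image.ravel() - image.mean()
    weights = np.zeros(c)
    for ch in range(c):
        f = up[ch].ravel() - up[ch].mean()
        denom = np.linalg.norm(f) * np.linalg.norm(g)
        weights[ch] = (f @ g) / denom if denom > 0 else 0.0
    cam = np.tensordot(weights, up, axes=1)         # weighted channel sum
    cam = np.maximum(cam, 0.0)                      # keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam
```

Channels whose activations line up with bright target pixels receive large weights, so the heatmap concentrates on the regions the network actually responds to.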

