A SAR Target Recognition Method via Combination of Multilevel Deep Features

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Junhua Wang ◽  
Yuan Jiang

For the problem of synthetic aperture radar (SAR) image target recognition, a method based on the combination of multilevel deep features is proposed. A residual network (ResNet) is used to learn the multilevel deep features of SAR images. Based on a similarity measure, the multilevel deep features are clustered into several feature sets. Each feature set is then characterized and classified by joint sparse representation (JSR), yielding a corresponding output. Finally, the results from the different feature sets are combined by weighted fusion to obtain the target recognition result. The proposed method effectively combines the advantages of ResNet and JSR in feature extraction and classification, improving overall recognition performance. Experiments and analysis are carried out on the sample-rich MSTAR dataset. The results show that the proposed method achieves superior performance on 10 types of target samples under the standard operating condition (SOC), noise interference, and occlusion conditions, which verifies its effectiveness.
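The similarity-based grouping of multilevel deep features described above can be illustrated with a minimal sketch. The abstract does not specify the similarity measure or the clustering algorithm, so the cosine similarity and the greedy threshold grouping below are illustrative assumptions, not the authors' method:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_features(features, threshold=0.9):
    """Greedily group feature vectors whose cosine similarity to a
    cluster's first member exceeds `threshold` (hypothetical rule)."""
    clusters = []
    for f in features:
        for c in clusters:
            if cosine_sim(f, c[0]) >= threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters
```

With three deep-feature vectors, two of which point in nearly the same direction, the sketch yields two feature sets, each of which would then be passed to a JSR classifier.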

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Lin Chen ◽  
Peng Zhan ◽  
Luhui Cao ◽  
Xueqing Li

A multiview synthetic aperture radar (SAR) target recognition method with discrimination and correlation analysis is proposed in this study. The multiple views are first prescreened by a support vector machine (SVM) to select the highly discriminative ones. These views are then clustered into several view sets, within which the images share high correlations. Joint sparse representation (JSR) is adopted to classify the SAR images in each view set, and the decisions from the different view sets are fused using a linear weighting strategy. The proposed method makes fuller use of the multiview SAR images, so the recognition performance is effectively enhanced. To test the proposed method, experiments are set up on the moving and stationary target acquisition and recognition (MSTAR) dataset. The results show that the proposed method achieves superior performance under different conditions compared with several reference methods.
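The SVM prescreening step can be sketched as follows, under the assumption (not stated in the abstract) that a view counts as highly discriminative when the margin between its top two SVM decision scores exceeds a threshold; the function name and the margin value are hypothetical:

```python
def prescreen_views(view_scores, margin=0.5):
    """Keep the indices of views whose SVM decision margin
    (top-1 score minus top-2 score) exceeds `margin`."""
    selected = []
    for i, scores in enumerate(view_scores):
        ranked = sorted(scores, reverse=True)
        if ranked[0] - ranked[1] >= margin:
            selected.append(i)
    return selected
```

A view whose best and second-best class scores are nearly tied is ambiguous and is dropped before the correlation-based clustering stage.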


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Lei Lei ◽  
Dongen Guo ◽  
Zhihui Feng

This paper proposes a synthetic aperture radar (SAR) image target recognition method using multiple views and inner correlation analysis. Owing to the azimuth sensitivity of SAR images, the inner correlation between the multiview images participating in recognition is not sufficiently stable. To this end, the proposed method first clusters multiview SAR images based on image correlation and nonlinear correlation information entropy (NCIE) to obtain multiple view sets with strong internal correlations. For each view set, multitask sparse representation is used to reconstruct the SAR images in it with high precision. Finally, linear weighting is used to fuse the reconstruction errors from the different view sets, and the target category is determined according to the fused error. Tests conducted on the MSTAR dataset validate the effectiveness of the proposed method.
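The final fusion step, linearly weighting per-class reconstruction errors from the view sets and choosing the class with the smallest fused error, can be sketched directly; the function name and the example weights are illustrative:

```python
def fuse_errors(errors_per_set, weights):
    """Linearly fuse per-class reconstruction errors from several view
    sets; return (fused_errors, index_of_minimum_error_class)."""
    n_classes = len(errors_per_set[0])
    fused = [
        sum(w * errs[c] for w, errs in zip(weights, errors_per_set))
        for c in range(n_classes)
    ]
    return fused, min(range(n_classes), key=fused.__getitem__)
```

With two view sets that both reconstruct class 0 more accurately than class 1, the fused error picks class 0.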


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Chenyu Li ◽  
Guohua Liu

This paper applies block sparse Bayesian learning (BSBL) to synthetic aperture radar (SAR) target recognition. Traditional sparse representation-based classification (SRC) operates on a global dictionary assembled from all classes, after which the similarity between the test sample and each class is evaluated by the reconstruction error. This paper instead reconstructs the test sample over local dictionaries formed by the individual classes. Considering the azimuthal sensitivity of SAR images, the linear coefficients over a local dictionary are sparse with a block structure, so BSBL is employed to solve for them. The proposed method better exploits the representation capability of each class, thereby benefiting recognition performance. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) dataset confirm the effectiveness and robustness of the proposed method.
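The idea of block-structured coefficients over per-class local dictionaries can be illustrated with a deliberately crude sketch. This is not BSBL inference: it selects the azimuth block whose atoms correlate most with the test sample and reconstructs with that block's single best atom, which stands in for the block-sparse solve. All names and the one-atom approximation are assumptions:

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def block_residual(dictionary, y, block_size):
    """Score each contiguous block of atoms by its summed squared
    correlation with y, then reconstruct y with the winning block's
    best atom and return the residual energy."""
    blocks = [dictionary[i:i + block_size]
              for i in range(0, len(dictionary), block_size)]
    best_block = max(blocks, key=lambda blk: sum(_dot(a, y) ** 2 for a in blk))
    best_resid = float("inf")
    for atom in best_block:
        coef = _dot(atom, y) / _dot(atom, atom)   # least-squares coefficient
        resid = sum((v - coef * a) ** 2 for a, v in zip(atom, y))
        best_resid = min(best_resid, resid)
    return best_resid

def classify_local(class_dicts, y, block_size=2):
    """Index of the class whose local dictionary reconstructs y best."""
    resids = [block_residual(d, y, block_size) for d in class_dicts]
    return min(range(len(resids)), key=resids.__getitem__)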


2019 ◽  
Vol 11 (8) ◽  
pp. 906 ◽  
Author(s):  
Zongyong Cui ◽  
Cui Tang ◽  
Zongjie Cao ◽  
Nengyuan Liu

Automatic target recognition (ATR) can obtain important information for target surveillance from synthetic aperture radar (SAR) images. Thus, a direct automatic target recognition (D-ATR) method based on a deep neural network (DNN) is proposed in this paper. To recognize targets in large-scene SAR images, traditional SAR ATR methods comprise four major steps: detection, discrimination, feature extraction, and classification. However, the recognition performance is sensitive to each step, as the processing result of each step affects the following one. Meanwhile, these processes are independent, which leaves room for improvement in processing speed. The proposed D-ATR method integrates these steps into a single system and directly recognizes targets in large-scene SAR images by encapsulating all of the computation in a single deep convolutional neural network (DCNN). Before the DCNN, a fast sliding method is proposed to partition the large image into sub-images, avoiding the information loss caused by resizing the input images and preventing a target from being divided into several parts. After the DCNN, non-maximum suppression between sub-images (NMSS) is performed on the sub-image results to obtain an accurate result for the large-scene SAR image. Experiments on the MSTAR dataset and large-scene SAR images (with resolution 1478 × 1784) show that the proposed method obtains high accuracy and fast processing speed, and outperforms other methods such as CFAR+SVM, Region-based CNN, and YOLOv2.
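The two bookend stages, overlapped sliding partition before the DCNN and NMSS after it, can be sketched as follows. The overlap size, IoU threshold, and function names are assumptions; the abstract only states that the partition must not split a target and that duplicate detections across sub-images are suppressed:

```python
def slide_offsets(img_size, win, overlap):
    """Window origins along one axis: stride = win - overlap, so a
    target cut by one window appears whole in a neighbour; the last
    window is clamped to the image edge."""
    stride = win - overlap
    offs = list(range(0, max(img_size - win, 0) + 1, stride))
    if offs[-1] != img_size - win:
        offs.append(img_size - win)
    return offs

def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nmss(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression over detections gathered from
    all sub-images; returns indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=scores.__getitem__, reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

For a 100-pixel axis, 40-pixel windows, and 10 pixels of overlap, the partition yields origins 0, 30, and 60; two heavily overlapping detections of the same target collapse to the higher-scoring one.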


2021 ◽  
Vol 13 (20) ◽  
pp. 4021
Author(s):  
Lan Du ◽  
Lu Li ◽  
Yuchen Guo ◽  
Yan Wang ◽  
Ke Ren ◽  
...  

Radar target recognition methods usually use only a single type of high-resolution radar signal, e.g., high-resolution range profiles (HRRPs) or synthetic aperture radar (SAR) images. In fact, the SAR imaging procedure simultaneously yields both the HRRP data and the corresponding SAR image. Although the information contained in the two is not exactly the same, both are important for radar target recognition. Therefore, in this paper, we propose a novel end-to-end two-stream fusion network to make full use of the different characteristics obtained by modeling the HRRP data and SAR images, respectively, for SAR target recognition. The proposed fusion network contains two separate streams in the feature extraction stage: one takes advantage of a variational auto-encoder (VAE) network to acquire the latent probabilistic distribution characteristic of the HRRP data, and the other uses a lightweight convolutional neural network, LightNet, to extract 2D visual structure characteristics from the SAR images. Following the feature extraction stage, a fusion module integrates the latent probabilistic distribution characteristic and the structure characteristic to reflect the target information more comprehensively and sufficiently. The main contribution of the proposed method consists of two parts: (1) different characteristics from the HRRP data and the SAR image can be used effectively for SAR target recognition, and (2) an attention weight vector is used in the fusion module to adaptively integrate the different characteristics from the two sub-networks. On the HRRP data and SAR images of the MSTAR and civilian vehicle datasets, our method improves recognition rates by at least 0.96% and 2.16%, respectively, compared with current SAR target recognition methods.
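The attention-weighted fusion of the two streams can be sketched minimally: a softmax over per-stream scores yields the attention weight vector, which blends the HRRP and SAR features. The scalar-score formulation and all names are assumptions; in the actual network the scores would be learned:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_fuse(hrrp_feat, sar_feat, score_hrrp, score_sar):
    """Blend two equal-length per-stream feature vectors using an
    attention weight vector from softmax over (learned) scores."""
    w = softmax([score_hrrp, score_sar])
    return [w[0] * a + w[1] * b for a, b in zip(hrrp_feat, sar_feat)]
```

With equal scores the two streams contribute equally; as one score grows, its stream dominates the fused representation.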


2021 ◽  
Vol 13 (17) ◽  
pp. 3493
Author(s):  
Jifang Pei ◽  
Zhiyong Wang ◽  
Xueping Sun ◽  
Weibo Huo ◽  
Yin Zhang ◽  
...  

Synthetic aperture radar (SAR) is an advanced microwave imaging system of great importance. The recognition of real-world targets from SAR images, i.e., automatic target recognition (ATR), is an attractive but challenging issue. The majority of existing SAR ATR methods are designed for single-view SAR images. However, multiview SAR images contain more abundant classification information than single-view SAR images, which benefits automatic target classification and recognition. This paper proposes an end-to-end deep feature extraction and fusion network (FEF-Net) that can effectively exploit recognition information from multiview SAR images and can boost the target recognition performance. The proposed FEF-Net is based on a multiple-input network structure with some distinct and useful learning modules, such as deformable convolution and squeeze-and-excitation (SE). Multiview recognition information can be effectively extracted and fused with these modules. Therefore, excellent multiview SAR target recognition performance can be achieved by the proposed FEF-Net. The superiority of the proposed FEF-Net was validated based on experiments with the moving and stationary target acquisition and recognition (MSTAR) dataset.
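Of the learning modules named above, squeeze-and-excitation (SE) is compact enough to sketch: channels are squeezed by global average pooling, excited through two fully connected layers, and rescaled by sigmoid gates. The tiny dimensions and the weight matrices below are placeholders for learned parameters, not FEF-Net's actual configuration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(channels, w1, w2):
    """Squeeze-and-excitation sketch over per-channel feature maps
    (each channel given as a flat list of activations)."""
    squeezed = [sum(c) / len(c) for c in channels]            # squeeze: global avg pool
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)))
              for row in w1]                                   # FC + ReLU
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden)))
             for row in w2]                                    # FC + sigmoid
    return [[g * v for v in c] for g, c in zip(gates, channels)]
```

With zero second-layer weights every gate is sigmoid(0) = 0.5, so each channel is uniformly halved; trained weights would instead emphasize the informative channels.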


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Zhengwu Lu ◽  
Guosong Jiang ◽  
Yurong Guan ◽  
Qingdong Wang ◽  
Jianbo Wu

A synthetic aperture radar (SAR) target recognition method combining multiple features and multiple classifiers is proposed. Zernike moments, kernel principal component analysis (KPCA), and monogenic signals are used to describe SAR image features; the three feature types capture the target's geometric shape, projection, and image-decomposition characteristics, respectively, and their combined use effectively enhances the description of the target. In the classification stage, a support vector machine (SVM), sparse representation-based classification (SRC), and joint sparse representation (JSR) serve as the classifiers for the three feature types, respectively, producing the corresponding decision variables. Multiple sets of weight vectors are then used to fuse these decision variables and determine the target label of the test sample. Experiments based on the MSTAR dataset are performed under the standard operating condition (SOC) and extended operating conditions (EOCs). The experimental results verify the effectiveness, robustness, and adaptability of the proposed method.
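One illustrative reading of the weight-vector fusion is sketched below: each candidate weight vector fuses the three classifiers' per-class decision variables (higher meaning more likely) and votes for a label, and the majority label wins. The abstract does not state how the weight sets are combined, so this voting rule and all names are assumptions:

```python
def fuse_decisions(decisions, weight_sets):
    """Fuse per-classifier decision vectors under several candidate
    weight vectors; majority-vote over the resulting labels."""
    labels = []
    for weights in weight_sets:
        fused = [sum(w * d[c] for w, d in zip(weights, decisions))
                 for c in range(len(decisions[0]))]
        labels.append(max(range(len(fused)), key=fused.__getitem__))
    return max(set(labels), key=labels.count)
```

With the SVM and SRC decision variables favouring class 0 and only JSR favouring class 1, per-classifier weight vectors produce labels 0, 0, 1 and the vote settles on class 0.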


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2008
Author(s):  
Lu ◽  
Zhang ◽  
Xu ◽  
Lin ◽  
Huo

A novel satellite target recognition method based on radar data partition and deep learning techniques is proposed in this paper. For the radar satellite recognition task, orbital altitude is introduced as a distinct and accessible feature by which to divide the radar data. On this basis, we design a new distance metric for HRRPs, the normalized angular distance divided by the correlation coefficient (NADDCC), and apply a hierarchical clustering method based on this metric to segment the radar observation angular domain. With these techniques, the radar data partition is completed and multiple HRRP data clusters are obtained. To further mine the essential features in HRRPs, a GRU-SVM model is designed and applied for the first time to radar HRRP target recognition. It consists of a multi-layer GRU neural network as a deep feature extractor and a linear SVM as the classifier. After training, the GRU neural network extracts effective and highly distinguishable features from the HRRPs, and feature visualization demonstrates its advantages. Furthermore, performance testing and comparison experiments demonstrate that the GRU neural network has better overall performance for HRRP target recognition than an LSTM neural network or a conventional RNN, and that the recognition performance of our method is generally better than that of several other common feature extraction methods and of the same method without data partition.
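Taking the metric's name literally, NADDCC can be sketched as the angular distance between two HRRPs (arccos of their cosine similarity, normalized to [0, 1]) divided by their Pearson correlation coefficient. The exact normalization is an assumption inferred from the name, not the paper's definition:

```python
import math

def naddcc(p, q):
    """Normalized angular distance divided by correlation coefficient
    between two HRRPs (formula assumed from the metric's name)."""
    # Normalized angular distance: arccos of cosine similarity, scaled by pi.
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    cos = max(-1.0, min(1.0, dot / (norm_p * norm_q)))
    angle = math.acos(cos) / math.pi
    # Pearson correlation coefficient.
    mp, mq = sum(p) / len(p), sum(q) / len(q)
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = math.sqrt(sum((a - mp) ** 2 for a in p))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return angle / (cov / (sp * sq))
```

Identical profiles give zero angular distance and unit correlation, hence distance zero, which is the behaviour a hierarchical clustering over the angular domain would rely on.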

