Two-Stream Deep Fusion Network Based on VAE and CNN for Synthetic Aperture Radar Target Recognition

2021, Vol. 13 (20), pp. 4021
Author(s): Lan Du, Lu Li, Yuchen Guo, Yan Wang, Ke Ren, ...

Usually, radar target recognition methods use only a single type of high-resolution radar signal, e.g., the high-resolution range profile (HRRP) or the synthetic aperture radar (SAR) image. In fact, in the SAR imaging procedure, we can simultaneously obtain both the HRRP data and the corresponding SAR image. Although the information contained in the HRRP data and the SAR image is not exactly the same, both are important for radar target recognition. Therefore, in this paper, we propose a novel end-to-end two-stream fusion network that makes full use of the different characteristics obtained by modeling the HRRP data and the SAR images, respectively, for SAR target recognition. The proposed fusion network contains two separate streams in the feature extraction stage: one takes advantage of a variational auto-encoder (VAE) network to acquire the latent probabilistic distribution characteristic from the HRRP data, and the other uses a lightweight convolutional neural network, LightNet, to extract the 2D visual structure characteristics from the SAR images. Following the feature extraction stage, a fusion module integrates the latent probabilistic distribution characteristic and the structure characteristic so that the target information is reflected more comprehensively and sufficiently. The main contribution of the proposed method consists of two parts: (1) different characteristics from the HRRP data and the SAR image are used effectively for SAR target recognition, and (2) an attention weight vector in the fusion module adaptively integrates the different characteristics from the two sub-networks. Experimental results on the HRRP data and SAR images of the MSTAR and civilian vehicle datasets show recognition rate improvements of at least 0.96% and 2.16%, respectively, compared with current SAR target recognition methods.
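
The abstract outlines the architecture but not the implementation. Below is a minimal PyTorch sketch (not the authors' code) of the described two-stream design: a VAE-style encoder for 1D HRRP vectors, a small CNN standing in for LightNet (whose exact layers are not specified here), and an attention-weighted fusion module followed by a classifier. All layer sizes, the HRRP length of 256, and the 64x64 SAR chip size are illustrative assumptions.

```python
# Minimal sketch of a two-stream fusion classifier (illustrative, not the authors' code).
import torch
import torch.nn as nn

class HRRPEncoder(nn.Module):
    """VAE-style encoder: maps an HRRP vector to a latent mean/log-variance."""
    def __init__(self, in_dim=256, latent_dim=64):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)

    def forward(self, x):
        h = self.fc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample the latent probabilistic feature.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class SARImageEncoder(nn.Module):
    """Lightweight CNN (placeholder for LightNet) for 2D SAR image chips."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class TwoStreamFusionNet(nn.Module):
    """Fuses HRRP latent features and SAR image features with attention weights."""
    def __init__(self, hrrp_dim=256, feat_dim=64, num_classes=10):
        super().__init__()
        self.hrrp_enc = HRRPEncoder(hrrp_dim, feat_dim)
        self.sar_enc = SARImageEncoder(feat_dim)
        self.attn = nn.Linear(2 * feat_dim, 2)   # one weight per stream
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, hrrp, sar_img):
        z, mu, logvar = self.hrrp_enc(hrrp)
        v = self.sar_enc(sar_img)
        # Attention weight vector: adaptively re-weight the two feature streams.
        w = torch.softmax(self.attn(torch.cat([z, v], dim=1)), dim=1)
        fused = w[:, 0:1] * z + w[:, 1:2] * v
        return self.classifier(fused), mu, logvar

if __name__ == "__main__":
    # Example usage with random tensors (batch of 4 HRRPs and 64x64 SAR chips).
    net = TwoStreamFusionNet()
    logits, mu, logvar = net(torch.randn(4, 256), torch.randn(4, 1, 64, 64))
    print(logits.shape)  # torch.Size([4, 10])
```

In training, the classification loss would typically be combined with the usual VAE terms (reconstruction and KL divergence) on the HRRP branch; the weighting of those terms is not specified in the abstract.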

2020, Vol. 2020, pp. 1-10
Author(s): Chenyu Li, Guohua Liu

This paper applies block sparse Bayesian learning (BSBL) to synthetic aperture radar (SAR) target recognition. Traditional sparse representation-based classification (SRC) operates on a global dictionary collaborated over all classes, and the similarities between the test sample and the various classes are then evaluated by the reconstruction errors. This paper instead reconstructs the test sample on local dictionaries formed by the individual classes. Considering the azimuthal sensitivity of SAR images, the linear coefficients over a local dictionary are sparse with a block structure, so BSBL is employed to solve for the sparse coefficients. The proposed method better exploits the representation capability of each class, thus benefiting recognition performance. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) dataset confirm the effectiveness and robustness of the proposed method.
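
As a rough illustration of the per-class reconstruction decision rule described above, the NumPy sketch below reconstructs a test sample on each class-local dictionary and assigns the label with the smallest reconstruction error. The BSBL solver itself is not reimplemented; a ridge-regularized least-squares fit stands in for the block-sparse coefficient estimation, and all dimensions and names are illustrative assumptions.

```python
# Per-class reconstruction classification (BSBL solve replaced by a ridge surrogate).
import numpy as np

def classify_by_reconstruction(test_sample, class_dicts, lam=1e-3):
    """Reconstruct the test sample on each class-local dictionary and pick
    the class with the smallest reconstruction error.

    test_sample : (d,) feature vector of the test sample
    class_dicts : dict mapping class label -> (d, n_c) matrix whose columns
                  are training samples (atoms) of that class
    """
    errors = {}
    for label, D in class_dicts.items():
        # Surrogate for BSBL: ridge-regularized coefficients on the local dictionary.
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ test_sample)
        errors[label] = np.linalg.norm(test_sample - D @ alpha)
    return min(errors, key=errors.get), errors

# Example with synthetic data: 3 classes, 200-dim features, 20 atoms per class.
rng = np.random.default_rng(0)
dicts = {c: rng.standard_normal((200, 20)) for c in range(3)}
x = dicts[1] @ rng.standard_normal(20)       # sample generated from class 1
pred, errs = classify_by_reconstruction(x, dicts)
print(pred)  # expected: 1
```

In the method itself, the coefficients over each local dictionary are constrained to be block-sparse (blocks grouping atoms at nearby azimuths), which is what the BSBL inference provides beyond this simplified least-squares stand-in.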


IEEE Access, 2017, Vol. 5, pp. 26880-26891
Author(s): Fan Zhang, Chen Hu, Qiang Yin, Wei Li, Heng-Chao Li, ...

2019, Vol. 2019 (19), pp. 5940-5943
Author(s): Baozhang Yang, Jiacheng Ma, Yesheng Gao, Lei Liu, Xingzhao Liu

Sensors, 2017, Vol. 17 (12), pp. 192
Author(s): Miao Kang, Kefeng Ji, Xiangguang Leng, Xiangwei Xing, Huanxin Zou
