A Concurrent and Hierarchy Target Learning Architecture for Classification in SAR Application

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3218
Author(s):  
Mohamed Touafria ◽  
Qiang Yang

This article discusses the issue of Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) images. By learning a hierarchy of features automatically from massive amounts of training data, learning networks such as Convolutional Neural Networks (CNN) have recently achieved state-of-the-art results in many tasks. To extract better features from SAR targets and to obtain better accuracy, a new framework is proposed: First, three CNN models based on different convolution and pooling kernel sizes are proposed. Second, they are applied simultaneously to the SAR images to generate image features by extracting CNN features from different layers in two scenarios. In the first scenario, the activation vectors obtained from the fully connected layers are taken as the final image features; in the second scenario, dense features are extracted from the last convolutional layer and then encoded into global image features using Fisher Vectors (FVs), a commonly used feature coding approach. Finally, different combination and fusion approaches between the two sets of features are considered to construct the final representation of the SAR images for classification. Extensive experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset are conducted. The experimental results demonstrate the capability of the proposed method compared to several state-of-the-art methods.
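A minimal PyTorch sketch of the concurrent multi-branch idea is given below: three small CNNs with different kernel sizes process the same SAR chip and their activation vectors are fused by concatenation before classification. The layer widths, the feature dimension, and the simple concatenation fusion are illustrative assumptions rather than the authors' exact configuration, and the Fisher Vector encoding path is omitted.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """One branch: a small CNN whose convolution kernel size is a parameter."""
    def __init__(self, kernel_size, feat_dim=128):
        super().__init__()
        pad = kernel_size // 2
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size, padding=pad), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size, padding=pad), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)  # activation vector used as the image feature

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class ConcurrentCNN(nn.Module):
    """Three branches with different kernel sizes; features fused by concatenation."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.branches = nn.ModuleList([SmallCNN(k) for k in (3, 5, 7)])
        self.classifier = nn.Linear(3 * 128, num_classes)

    def forward(self, x):
        fused = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classifier(fused)

logits = ConcurrentCNN()(torch.randn(2, 1, 64, 64))  # two 64x64 single-channel SAR chips
```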

2021 ◽  
Vol 13 (24) ◽  
pp. 5121
Author(s):  
Yu Zhou ◽  
Yi Li ◽  
Weitong Xie ◽  
Lu Li

It is very common to apply convolutional neural networks (CNNs) to synthetic aperture radar (SAR) automatic target recognition (ATR). However, most CNN-based SAR ATR methods rely mainly on the image features of SAR images and make little use of their unique electromagnetic scattering characteristics. For SAR images, attributed scattering centers (ASCs) reflect the electromagnetic scattering characteristics and the local structures of the target, which are useful for SAR ATR. Therefore, we propose a network that comprehensively uses image features and ASC-related features to improve the performance of SAR ATR. The proposed network has two branches: one extracts more discriminative image features from the input SAR image; the other extracts physically meaningful features from the ASC schematic map, which reflects the local structure of the target corresponding to each ASC. Finally, the high-level features obtained by the two branches are fused to recognize the target. The experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset demonstrate the capability of the SAR ATR method proposed in this letter.
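The two-branch design can be sketched in PyTorch as follows. The branch depths, the feature dimension, and the use of identical backbones for the SAR chip and the ASC schematic map are assumptions made for illustration; the fusion here is a simple concatenation of the two high-level feature vectors.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

class TwoBranchATR(nn.Module):
    """Branch 1: image features from the SAR chip.
    Branch 2: structural features from the ASC schematic map.
    High-level features are fused before classification."""
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.image_branch = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                          nn.Linear(32, feat_dim))
        self.asc_branch = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(32, feat_dim))
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, sar_image, asc_map):
        f = torch.cat([self.image_branch(sar_image), self.asc_branch(asc_map)], dim=1)
        return self.classifier(f)

logits = TwoBranchATR()(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```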


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3535
Author(s):  
Ming Liu ◽  
Shichao Chen ◽  
Fugang Lu ◽  
Mengdao Xing

Sparse representation (SR) has been verified to be an effective tool for pattern recognition. Considering the multiplicative speckle noise in synthetic aperture radar (SAR) images, a product sparse representation (PSR) algorithm is proposed to achieve SAR target configuration recognition. To extract the essential characteristics of SAR images, the product model is utilized to describe them, combining the advantages of sparse representation and the product model to realize a more accurate sparse representation of the SAR image. Moreover, to weaken the influence of speckle noise on recognition, the speckle noise of SAR images is modeled by the Gamma distribution, and the sparse vector of the SAR image is obtained from a statistical standpoint. Experiments are conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. The experimental results validate the effectiveness and robustness of the proposed algorithm, which achieves higher recognition rates than several state-of-the-art algorithms under different circumstances.
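For context, a minimal sketch of the standard sparse-representation classification step that PSR builds on is shown below, using per-class dictionaries and orthogonal matching pursuit; the product model and the Gamma speckle statistics that distinguish the PSR algorithm are not reproduced here, and the random dictionaries are placeholders.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(y, dictionaries, n_nonzero=5):
    """Standard sparse-representation classification:
    pick the class whose sub-dictionary reconstructs the test sample best."""
    residuals = []
    for D in dictionaries:                      # one column-normalized dictionary per class
        x = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
        residuals.append(np.linalg.norm(y - D @ x))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
dictionaries = [rng.standard_normal((256, 40)) for _ in range(3)]   # 3 classes, 40 atoms each
dictionaries = [D / np.linalg.norm(D, axis=0) for D in dictionaries]
test_sample = rng.standard_normal(256)                              # vectorized SAR chip (placeholder)
print(src_classify(test_sample, dictionaries))
```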


2019 ◽  
Vol 11 (11) ◽  
pp. 1316
Author(s):  
Li Wang ◽  
Xueru Bai ◽  
Feng Zhou

In recent studies, synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms based on the convolutional neural network (CNN) have achieved high recognition rates on the moving and stationary target acquisition and recognition (MSTAR) dataset. However, in a SAR ATR task, feature maps carrying little information, which the CNN learns automatically, can disturb the classifier. We design a new enhanced squeeze and excitation (enhanced-SE) module to solve this problem and then propose a new SAR ATR network, i.e., the enhanced squeeze and excitation network (ESENet). Compared to existing CNN structures designed for SAR ATR, the ESENet can extract more effective features from SAR images and obtain better generalization performance. On the MSTAR dataset containing pure targets, the proposed method achieves a recognition rate of 97.32%, exceeding existing CNN-based SAR ATR algorithms. Additionally, it shows robustness to large depression angle variation, configuration variants, and version variants.
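A minimal PyTorch sketch of the standard squeeze-and-excitation block, which the enhanced-SE module builds on, is given below; the specific enhancements proposed in the paper are not reproduced, and the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: re-weights channels so that
    uninformative feature maps are suppressed before reaching the classifier."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)           # excitation: per-channel gating

features = torch.randn(2, 64, 16, 16)
reweighted = SEBlock(64)(features)
```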


2021 ◽  
Vol 13 (7) ◽  
pp. 1236
Author(s):  
Yuanjun Shu ◽  
Wei Li ◽  
Menglong Yang ◽  
Peng Cheng ◽  
Songchen Han

Convolutional neural networks (CNNs) have been widely used in change detection of synthetic aperture radar (SAR) images and have been proven to achieve better precision than traditional methods. A two-stage patch-based deep learning method with a label updating strategy is proposed in this paper. The initial label and mask are generated at the pre-classification stage. A two-stage updating strategy is then applied to gradually recover changed areas. At the first stage, the diversity of the training data is gradually restored. The output of the designed CNN is further processed to generate a new label and a new mask for the following learning iteration. As the diversity of the data is ensured after the first stage, pixels within uncertain areas can be easily classified at the second stage. Experimental results on several representative datasets show the effectiveness of the proposed method compared with several existing competitive methods.
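The label-updating idea, i.e., retraining on the currently confident labels and then relabeling pixels that the new model classifies with high confidence, can be sketched with a simple self-training loop. The logistic-regression stand-in for the CNN, the synthetic features, and the confidence thresholds below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 8))                     # one feature vector per pixel/patch
labels = (features[:, 0] + 0.3 * rng.standard_normal(1000) > 0).astype(int)  # noisy initial labels, 1 = changed
confident = rng.random(1000) < 0.2                            # initial pre-classification mask

for stage in range(2):                                        # two-stage updating strategy
    clf = LogisticRegression(max_iter=1000).fit(features[confident], labels[confident])
    proba = clf.predict_proba(features)[:, 1]
    sure = (proba > 0.9) | (proba < 0.1)                      # newly confident pixels
    labels[sure] = (proba[sure] > 0.5).astype(int)            # update the label map
    confident = confident | sure                              # grow the training mask

change_map = clf.predict(features)                            # final change/no-change decision
```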


2016 ◽  
Vol 2016 ◽  
pp. 1-11
Author(s):  
Hongqiao Wang ◽  
Yanning Cai ◽  
Guangyuan Fu ◽  
Shicheng Wang

Aiming at multiple-target recognition in large-scene SAR images with strong speckle, a robust full-process method, from target detection and feature extraction to target recognition, is studied in this paper. By introducing a simple 8-neighborhood orthogonal basis, a local multiscale decomposition method centered on the target's center of gravity is presented. Using this method, an image can be processed with a multilevel sampling filter, and the target's multiscale features in eight directions, together with one low-frequency filtering feature, can be derived directly by sampling key pixels. At the same time, a recognition algorithm that integrates the local multiscale features with a multiscale wavelet kernel classifier is studied, which realizes quick, robust, and highly accurate classification of multiclass image targets. The results of classification and of an adaptability analysis on speckle show that the robust algorithm is effective not only for the MSTAR (Moving and Stationary Target Acquisition and Recognition) target chips but also for the automatic recognition of multiple target classes in large-scene SAR images with strong speckle; meanwhile, the method has good robustness to target rotation and scale transformations.
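A loose sketch of the key-pixel sampling idea, reading pixels along the eight neighborhood directions at growing radii around the target's center of gravity, is given below; the choice of scales, the single-pixel sampling, and the way the low-frequency component is formed are assumptions and do not reproduce the multilevel sampling filter of the paper.

```python
import numpy as np

def directional_multiscale_features(img, scales=(1, 2, 4, 8)):
    """Sample key pixels along the 8 neighborhood directions around
    the target's center of gravity, one sample per direction and scale."""
    ys, xs = np.indices(img.shape)
    total = img.sum() + 1e-9
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total   # center of gravity
    directions = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    feats = []
    for s in scales:
        for dy, dx in directions:
            y = int(np.clip(round(cy + dy * s), 0, img.shape[0] - 1))
            x = int(np.clip(round(cx + dx * s), 0, img.shape[1] - 1))
            feats.append(img[y, x])
    feats.append(img.mean())                  # a simple low-frequency component
    return np.array(feats)

chip = np.abs(np.random.default_rng(0).standard_normal((64, 64)))  # placeholder SAR chip
print(directional_multiscale_features(chip).shape)  # (33,) = 8 directions x 4 scales + 1
```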


2020 ◽  
Vol 12 (16) ◽  
pp. 2636
Author(s):  
Emanuele Dalsasso ◽  
Xiangli Yang ◽  
Loïc Denis ◽  
Florence Tupin ◽  
Wen Yang

Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Many different schemes have been proposed for the restoration of intensity SAR images. Among the possible approaches, methods based on convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration. CNN training requires good training data: many pairs of speckle-free/speckle-corrupted images. This is an issue in SAR applications, given the inherent scarcity of speckle-free images. To handle this problem, this paper analyzes different strategies one can adopt, depending on the speckle removal task one wishes to perform and the availability of multitemporal stacks of SAR data. The first strategy applies a CNN model, trained to remove additive white Gaussian noise from natural images, within a recently proposed SAR speckle removal framework: MuLoG (MUlti-channel LOgarithm with Gaussian denoising). No training on SAR images is performed; the network is readily applied to speckle reduction tasks. The second strategy considers a novel approach to construct a reliable dataset of speckle-free SAR images necessary to train a CNN model. Finally, a hybrid approach is also analyzed: the CNN used to remove additive white Gaussian noise is trained on speckle-free SAR images. The proposed methods are compared to other state-of-the-art speckle removal filters to evaluate the quality of denoising and to discuss the pros and cons of the different strategies. Along with the paper, we make available the weights of the trained network to allow its usage by other researchers.
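The core idea of reusing an additive-Gaussian denoiser for multiplicative speckle can be illustrated by the much simpler homomorphic scheme below (MuLoG itself is an iterative plug-and-play algorithm and is not reproduced here); a Gaussian filter stands in for the trained CNN denoiser, and the bias introduced by the log transform is ignored in this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_despeckle(intensity, gaussian_denoiser):
    """Log-transform turns multiplicative speckle into (approximately)
    additive noise, so an additive-Gaussian denoiser can be applied."""
    log_img = np.log(intensity + 1e-10)
    log_clean = gaussian_denoiser(log_img)
    return np.exp(log_clean)

rng = np.random.default_rng(0)
clean = np.ones((128, 128)) * 5.0
speckle = rng.gamma(shape=1.0, scale=1.0, size=clean.shape)   # single-look intensity speckle
noisy = clean * speckle
restored = homomorphic_despeckle(noisy, lambda x: gaussian_filter(x, sigma=2))
```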


2017 ◽  
Vol 2017 ◽  
pp. 1-18
Author(s):  
Xiaohui Zhao ◽  
Yicheng Jiang ◽  
Tania Stathaki

A strategy is introduced for achieving high accuracy in synthetic aperture radar (SAR) automatic target recognition (ATR) tasks. Initially, a novel pose rectification process and an image normalization process are applied sequentially to produce images with less variation prior to the feature processing stage. Then, feature sets rich in texture and edge information are extracted using wavelet coefficients, and more effective and compact feature sets are obtained by reducing the redundancy and dimensionality of the extracted features. Finally, a group of discrimination trees is learned and combined into a final classifier in the framework of Real-AdaBoost. The proposed method is evaluated on the public release database for moving and stationary target acquisition and recognition (MSTAR). Several comparative studies are conducted to evaluate the effectiveness of the proposed algorithm. Experimental results show the distinctive superiority of the proposed method under both standard operating conditions (SOCs) and extended operating conditions (EOCs). Moreover, additional tests suggest that good recognition accuracy can be achieved even with a limited number of training images, as long as they are captured with an appropriately incremental sampling step in target pose.
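A rough sketch of the feature-plus-boosting pipeline is shown below, with 2-D Haar wavelet coefficients as features and scikit-learn's AdaBoost over shallow decision trees standing in for the Real-AdaBoost ensemble of discrimination trees; the synthetic chips, the wavelet choice, and the ensemble size are assumptions.

```python
import numpy as np
import pywt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def wavelet_features(img):
    """2-D Haar wavelet transform; the subband coefficients carry
    the texture and edge information used as features."""
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
chips = rng.standard_normal((200, 32, 32))           # synthetic stand-ins for SAR chips
labels = rng.integers(0, 3, size=200)                # 3 target classes
X = np.stack([wavelet_features(c) for c in chips])

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2), n_estimators=100)
clf.fit(X, labels)
```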


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Ming Liu ◽  
Shichao Chen ◽  
Fugang Lu ◽  
Junsheng Liu

Dictionary construction is a key factor for sparse representation (SR)-based algorithms, and it has been verified that learned dictionaries are more effective than predefined ones. In this paper, we propose a product dictionary learning (PDL) algorithm to achieve synthetic aperture radar (SAR) target configuration recognition. The proposed algorithm obtains the dictionaries from a statistical standpoint to enhance its robustness to noise. Taking the inevitable multiplicative speckle in SAR images into account, the proposed algorithm employs the product model to describe SAR images; a more accurate description of the SAR image results in higher recognition rates. The accuracy and robustness of the proposed algorithm are validated on the moving and stationary target acquisition and recognition (MSTAR) database.
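For context, a sketch of standard per-class dictionary learning followed by minimum-residual classification is given below; the statistical product model that distinguishes PDL is not reproduced, and the random training samples and dictionary sizes are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# One learned dictionary per target configuration (standard, non-product version).
class_dictionaries = []
for c in range(3):
    samples = rng.standard_normal((100, 256))        # vectorized training chips of class c (placeholders)
    dl = MiniBatchDictionaryLearning(n_components=40, transform_algorithm='omp',
                                     transform_n_nonzero_coefs=5, random_state=0)
    dl.fit(samples)
    class_dictionaries.append(dl)

test = rng.standard_normal((1, 256))                 # vectorized test chip
residuals = [np.linalg.norm(test - dl.transform(test) @ dl.components_)
             for dl in class_dictionaries]
print(int(np.argmin(residuals)))                     # predicted configuration
```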


2020 ◽  
Vol 12 (6) ◽  
pp. 990
Author(s):  
Raveerat Jaturapitpornchai ◽  
Poompat Rattanasuwan ◽  
Masashi Matsuoka ◽  
Ryosuke Nakamura

The difficulty of obtaining sufficient datasets for training deep learning networks prevents many applications from achieving accurate results, especially when detecting new constructions in time-series satellite imagery, since this requires at least two images of the same scene that contain new constructions. To tackle this problem, we introduce the Chronological Order Reverse Network (CORN), an architecture for detecting newly built constructions in time-series SAR images that does not require a large quantity of training data. The network uses two U-net adaptations to learn the changes between images in both the Time 1–Time 2 and Time 2–Time 1 orders, which lets it learn twice as many changes from different perspectives. We trained the network with 2028 pairs of 256 × 256 pixel SAR images from ALOS-PALSAR, totaling 4056 pairs for the network to learn from, since it learns from both Time 1–Time 2 and Time 2–Time 1. As a result, the network can detect new constructions more accurately, especially at building boundaries, compared to the original U-net trained on the same amount of training data. The experiments also show that the model trained with CORN can be used with images from Sentinel-1. The source code is available at https://github.com/Raveerat-titech/CORN.
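The chronological-order-reversal idea, which doubles the training set by presenting each image pair in both temporal orders with the same change mask, can be sketched as a dataset wrapper; whether one network or two separate U-net adaptations consume the two orderings is simplified here, and the tensors are random placeholders.

```python
import torch
from torch.utils.data import Dataset

class ChronologicalReversePairs(Dataset):
    """Doubles the training set by yielding each image pair in both
    Time1-Time2 and Time2-Time1 channel orders with the same change mask."""
    def __init__(self, t1_images, t2_images, change_masks):
        self.t1, self.t2, self.masks = t1_images, t2_images, change_masks

    def __len__(self):
        return 2 * len(self.t1)

    def __getitem__(self, idx):
        i, reverse = idx // 2, idx % 2 == 1
        a, b = (self.t2[i], self.t1[i]) if reverse else (self.t1[i], self.t2[i])
        x = torch.stack([a, b])              # 2-channel input for a U-net-style network
        return x, self.masks[i]

t1 = torch.randn(10, 256, 256); t2 = torch.randn(10, 256, 256)
masks = torch.randint(0, 2, (10, 256, 256))
dataset = ChronologicalReversePairs(t1, t2, masks)   # 20 training samples from 10 pairs
```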

