Statistics Learning Network Based on the Quadratic Form for SAR Image Classification

2019, Vol. 11(3), pp. 282
Author(s): Chu He, Bokun He, Xinlong Liu, Chenyao Kang, Mingsheng Liao

The convolutional neural network (CNN) has shown great potential in many fields; however, transferring this potential to synthetic aperture radar (SAR) image interpretation is still a challenging task. The coherent imaging mechanism causes the SAR signal to exhibit strong fluctuations, and this randomness calls for many degrees of freedom (DoFs) in the SAR image description. In this paper, a statistics learning network (SLN) based on the quadratic form is presented, in which statistical features are fitted for SAR image representation. (i) Relying on the quadratic form from linear algebra, a quadratic primitive is developed to comprehensively learn elementary statistical features. This primitive is an extension of the convolutional primitive that involves both nonlinear and linear transformations and provides more flexibility in feature extraction. (ii) With the aid of this quadratic primitive, the SLN is proposed for the classification task. In the SLN, different types of statistics of SAR images are automatically extracted for representation. Experimental results on three datasets show that the SLN outperforms a standard CNN and traditional texture-based methods and has potential for SAR image classification.
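To illustrate the quadratic-form idea, a minimal NumPy sketch of a quadratic primitive acting on one vectorized image patch is given below; the names `quadratic_primitive`, `W`, `w`, and `b` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def quadratic_primitive(x, W, w, b):
    """Quadratic-form response for a vectorized patch x:
    combines a bilinear term x^T W x with a linear term w^T x and a bias."""
    return x @ W @ x + w @ x + b

# Illustrative usage on a random 5x5 speckle-like patch
rng = np.random.default_rng(0)
patch = rng.rayleigh(scale=1.0, size=(5, 5))   # Rayleigh amplitudes mimic speckle
x = patch.ravel()
W = rng.normal(size=(x.size, x.size)) * 0.01   # learnable quadratic weights
w = rng.normal(size=x.size) * 0.01             # learnable linear weights
b = 0.0
print(quadratic_primitive(x, W, w, b))
```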

2021, Vol. 13(2), pp. 328
Author(s): Wenkai Liang, Yan Wu, Ming Li, Yice Cao, Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, the presence of intricate spatial structural patterns and a complex statistical nature makes SAR image classification a challenging task, especially in the case of limited labeled SAR data. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN) to capture the spatial patterns and obtain discriminative features of SAR images. The MFFN is a deep convolutional neural network (CNN). To make full use of a large amount of unlabeled data, the weights of each layer of the MFFN are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in the MFFN can effectively exploit the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps, so that the obtained covariance descriptor is more discriminative for various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with other related algorithms.
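To make the covariance-pooling step concrete, the following is a minimal NumPy sketch of global second-order (covariance) pooling over a stack of feature maps, followed by the common log-Euclidean mapping; the function name and the small regularization constant are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def covariance_pooling(features, eps=1e-5):
    """Global second-order pooling: features has shape (C, H, W).
    Returns a C x C covariance matrix mapped to log-Euclidean space."""
    C = features.shape[0]
    X = features.reshape(C, -1)                  # C x (H*W) sample matrix
    X = X - X.mean(axis=1, keepdims=True)        # center each channel
    cov = X @ X.T / (X.shape[1] - 1)             # sample covariance
    cov += eps * np.eye(C)                       # ensure positive definiteness
    vals, vecs = np.linalg.eigh(cov)             # matrix logarithm via eigendecomposition
    return vecs @ np.diag(np.log(vals)) @ vecs.T

feature_maps = np.random.rand(8, 32, 32)         # e.g. 8 fused feature channels
descriptor = covariance_pooling(feature_maps)
print(descriptor.shape)                           # (8, 8)
```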


2013, Vol. 760-762, pp. 1486-1490
Author(s): Ding Ding Jiang, De Rong Cai, Qiang Wei

SAR image recognition is an important part of aviation image interpretation work. In this paper, based on the characteristics of SAR images, a morphological filtering neural network model of practical significance and its adaptive BP learning algorithm are presented. The experimental results show that the algorithm not only adapts to complex and diverse background environments, but also detects continuously moving targets with invariance to displacement, scale, and rotation.
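As a generic illustration of the morphological filtering on which such a network builds (not the paper's adaptive model), the SciPy sketch below applies grayscale opening and closing to suppress small bright speckle while preserving larger structures.

```python
import numpy as np
from scipy import ndimage

# Synthetic SAR-like amplitude image with speckle and one bright extended target
rng = np.random.default_rng(1)
image = rng.rayleigh(scale=1.0, size=(128, 128))
image[40:80, 40:80] += 3.0

# Grayscale opening removes small bright speckle; closing fills small dark gaps
opened = ndimage.grey_opening(image, size=(3, 3))
filtered = ndimage.grey_closing(opened, size=(3, 3))
print(filtered.shape)
```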


Author(s): M. Schmitt, L. H. Hughes, M. Körner, X. X. Zhu

In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaption of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate if the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.
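A minimal sketch of the Lab-space fusion step is shown below, assuming the SAR backscatter supplies the lightness channel L and a generative model predicts the a/b chrominance channels; `predict_ab` is a hypothetical stand-in for that model, not the authors' network.

```python
import numpy as np
from skimage import color

def colorize_sar(sar_gray, predict_ab):
    """Fuse a single-channel SAR image with predicted chrominance in Lab space.
    sar_gray: 2-D array scaled to [0, 1]; predict_ab: callable returning (H, W, 2)."""
    L = sar_gray * 100.0                          # lightness channel in [0, 100]
    ab = predict_ab(sar_gray)                     # predicted a/b channels (hypothetical model)
    lab = np.dstack([L, ab[..., 0], ab[..., 1]])
    return color.lab2rgb(lab)                     # artificial color SAR image

# Illustrative usage with a dummy predictor that returns neutral chrominance
sar = np.random.rand(64, 64)
rgb = colorize_sar(sar, lambda g: np.zeros(g.shape + (2,)))
print(rgb.shape)                                  # (64, 64, 3)
```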


Author(s): D. Devapal, S. S. Kumar, R. Sethunadh

Synthetic Aperture Radar (SAR) is an all-weather, day-and-night imaging technology in which a radar mounted on an aircraft or satellite transmits successive pulses of radio waves to illuminate the target scene. Signal processing of the recorded backscattered echoes produces SAR images. SAR images contain inherent multiplicative speckle noise, which is formed by the constructive and destructive interference of the transmitted signals with the returning signals. Speckle noise appears as granular patterns and makes image interpretation difficult. Non-local means approaches such as Block Matching and 3D filtering (BM3D) are effective schemes for removing speckle noise from SAR images. This method gives good performance for additive noise but is not adaptive to the curved edges and discontinuities that occur in SAR images affected by multiplicative noise. This paper proposes a three-step refined algorithm that adapts BM3D for despeckling multiplicative speckle noise. In the proposed scheme, the curvelet transform is used to find the transform coefficients, and this modification in the transform domain improves the despeckling accuracy of BM3D. In addition, Wiener filtering is replaced with Importance Sampling Unscented Kalman Filtering (ISUKF) to better adapt to discontinuities in the real SAR image. An improved grouping method based on the Manhattan distance is also proposed, which better adapts to constantly changing multiplicative noise statistics. A detailed comparative study is carried out on each step using various well-known performance measures. The results show that the proposed Curvelet-ISUKF-Manhattan BM3D (CIM-BM3D) despeckling method attains better values for all the performance measures, and the results are also visually verified.
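To illustrate the Manhattan-distance grouping step, here is a minimal NumPy sketch that ranks candidate patches against a reference patch by L1 (Manhattan) distance, as block matching would do before 3-D collaborative filtering; the function name, window sizes, and group size are illustrative assumptions only.

```python
import numpy as np

def group_similar_patches(image, ref_top_left, patch=8, search=16, k=8):
    """Return the top-left corners of the k patches most similar to the reference
    patch under the Manhattan (L1) distance, searched in a local window."""
    r0, c0 = ref_top_left
    ref = image[r0:r0 + patch, c0:c0 + patch]
    candidates = []
    for r in range(max(0, r0 - search), min(image.shape[0] - patch, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(image.shape[1] - patch, c0 + search) + 1):
            block = image[r:r + patch, c:c + patch]
            d = np.abs(block - ref).sum()          # Manhattan distance between patches
            candidates.append((d, (r, c)))
    candidates.sort(key=lambda t: t[0])
    return [pos for _, pos in candidates[:k]]

img = np.random.rayleigh(size=(64, 64))            # speckle-like test image
print(group_similar_patches(img, (20, 20)))
```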


2021, Vol. 14(1), pp. 25
Author(s): Kaiyang Ding, Junfeng Yang, Zhao Wang, Kai Ni, Xiaohao Wang, ...

Traditional ship identification systems have difficulty identifying illegal or broken ships, but the wakes generated by ships can be used as a major feature for identification. However, multi-ship and multi-scale wake detection is also a big challenge. This paper combines the geometric and pixel characteristics of ships and their wakes in Synthetic Aperture Radar (SAR) images and proposes a method for multi-ship and multi-scale wake detection. The method first detects the highlight pixel areas in the image and then generates specific windows around each centroid, thereby detecting wakes of different sizes in different areas. In addition, all wake components can be located completely based on wake clustering, and the statistical features of wake-axis pixels can be used to determine the visible length of the wake. Test results on Gaofen-3 SAR images show the potential of the method for wake detection.
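A minimal sketch of the first step (thresholding the highlight pixel areas and generating a window around each detected centroid) is shown below, using SciPy connected-component labeling; the threshold rule and window scaling are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np
from scipy import ndimage

def ship_windows(sar_image, k=3.0, scale=8):
    """Detect bright (highlight) regions, compute their centroids, and return
    a square analysis window around each centroid sized by the region extent."""
    threshold = sar_image.mean() + k * sar_image.std()    # simple global threshold rule
    mask = sar_image > threshold
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask.astype(float), labels, range(1, n + 1))
    windows = []
    for i, (cy, cx) in enumerate(centroids, start=1):
        size = scale * int(np.sqrt((labels == i).sum()))   # window grows with target size
        windows.append((int(cy), int(cx), max(size, 16)))
    return windows

img = np.random.rayleigh(size=(256, 256))
img[100:110, 120:126] += 10.0                              # a bright ship-like target
print(ship_windows(img))
```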


Sensors, 2019, Vol. 19(4), pp. 871
Author(s): Chu He, Dehui Xiong, Qingyi Zhang, Mingsheng Liao

Thanks to the availability of large-scale data, deep Convolutional Neural Networks (CNNs) have witnessed success in various applications of computer vision. However, the performance of CNNs on Synthetic Aperture Radar (SAR) image classification is unsatisfactory due to the lack of well-labeled SAR data, as well as the differences in imaging mechanisms between SAR images and optical images. Therefore, this paper addresses the problem of SAR image classification by employing the Generative Adversarial Network (GAN) to produce more labeled SAR data. We propose special GANs for generating SAR images to be used in the training process. First, we incorporate the quadratic operation into the GAN, extending the convolution to make the discriminator better represent the SAR data; second, the statistical characteristics of SAR images are integrated into the GAN to make its value function more reasonable; finally, two types of parallel connected GANs are designed, one of which we call PWGAN, combining the Deep Convolutional GAN (DCGAN) and Wasserstein GAN with Gradient Penalty (WGAN-GP) together in the structure, and the other, which we call CNN-PGAN, applying a pre-trained CNN as a discriminator to the parallel GAN. Both PWGAN and CNN-PGAN consist of a number of discriminators and generators according to the number of target categories. Experimental results on the TerraSAR-X single polarization dataset demonstrate the effectiveness of the proposed method.
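As context for the WGAN-GP branch, below is a minimal PyTorch sketch of the standard gradient-penalty term used by WGAN-GP (a textbook formulation, not the paper's specific parallel architecture); `discriminator` stands for any callable mapping image batches to scalar scores.

```python
import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    """WGAN-GP penalty: push the gradient norm of D at interpolated samples toward 1."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()

# Illustrative usage with a toy discriminator on 1-channel 64x64 "SAR" batches
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
real = torch.randn(4, 1, 64, 64)
fake = torch.randn(4, 1, 64, 64)
print(gradient_penalty(D, real, fake).item())
```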


Author(s): Jiankun Chen, Xiaolan Qiu, Chuanzhao Han, Yirong Wu

Recent neuroscience research shows that nerve information in the brain is encoded not only by spatial information but also by temporal information. Spiking neural networks based on pulse frequency coding play a very important role in dealing with brain signals, especially complicated spatio-temporal information. In this paper, an unsupervised learning algorithm for bilayer feedforward spiking neural networks based on spike-timing dependent plasticity (STDP) competitiveness is proposed and applied to SAR image classification on MSTAR for the first time. The SNN learns autonomously from the input values without any labeled signal, and the overall classification accuracy on SAR targets reaches 80.8%. The experimental results show that the algorithm adopts synaptic neurons and a network structure with stronger biological rationality and has the ability to classify targets in SAR images. Meanwhile, the feature-map extraction ability of the neurons is visualized through the generative property of the SNN, which is a beneficial attempt to apply brain-like neural networks to SAR image interpretation.
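For reference, a minimal NumPy sketch of the pair-based STDP rule (potentiate when the presynaptic spike precedes the postsynaptic spike, depress otherwise) is given below; the amplitudes and time constants are generic textbook values, not those used in the paper.

```python
import numpy as np

def stdp_update(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change. delta_t = t_post - t_pre in milliseconds:
    positive delta_t (pre before post) potentiates, negative delta_t depresses."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

for dt in (-40.0, -5.0, 5.0, 40.0):
    print(dt, stdp_update(dt))
```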


2021, Vol. 13(9), pp. 1772
Author(s): Zhenpeng Feng, Mingzhe Zhu, Ljubiša Stanković, Hongbing Ji

Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR image processing. Generally, SAR image interpretation comprises complex procedures including filtering, feature extraction, image segmentation, and target recognition, which greatly reduce the efficiency of data processing. In the era of deep learning, numerous automatic target recognition methods have been proposed based on convolutional neural networks (CNNs) due to their strong capabilities for data abstraction and mining. In contrast to general methods, CNNs have an end-to-end structure in which complex data preprocessing is not needed, so the efficiency can be improved dramatically once a CNN is well trained. However, the recognition mechanism of a CNN is unclear, which hinders its application in many scenarios. In this paper, Self-Matching class activation mapping (CAM) is proposed to visualize what a CNN learns from SAR images to make a decision. Self-Matching CAM assigns a pixel-wise weight matrix to feature maps of different channels by matching them with the input SAR image. By using Self-Matching CAM, the detailed information of the target can be well preserved in an accurate visual explanation heatmap of a CNN for SAR image interpretation. Numerous experiments on a benchmark dataset (MSTAR) verify the validity of Self-Matching CAM.
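A minimal sketch of the general idea (weighting each upsampled feature-map channel by its pixel-wise agreement with the input image and accumulating the result into a heatmap) is shown below; this is a simplified illustration of a CAM-style visualization under those assumptions, not the exact Self-Matching CAM algorithm.

```python
import numpy as np

def matching_cam(input_image, feature_maps):
    """Simplified CAM-style heatmap: each upsampled channel receives a pixel-wise
    weight derived from its agreement with the normalized input SAR image."""
    H, W = input_image.shape
    img = (input_image - input_image.mean()) / (input_image.std() + 1e-8)
    heatmap = np.zeros((H, W))
    for fmap in feature_maps:
        # nearest-neighbour upsampling of each h x w feature map to H x W
        up = np.kron(fmap, np.ones((H // fmap.shape[0], W // fmap.shape[1])))
        up_n = (up - up.mean()) / (up.std() + 1e-8)
        weight = np.maximum(img * up_n, 0.0)       # pixel-wise matching weight
        heatmap += weight * up
    return heatmap / (heatmap.max() + 1e-8)        # normalize for display

cam = matching_cam(np.random.rand(64, 64), np.random.rand(8, 16, 16))
print(cam.shape)                                    # (64, 64)
```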


PIERS Online, 2007, Vol. 3(5), pp. 625-628
Author(s): Jian Yang, Xiaoli She, Tao Xiong
