SAR Image Despeckling by Noisy Reference-Based Deep Learning Method

2020, Vol. 58 (12), pp. 8807-8818
Author(s): Xiaoshuang Ma, Chen Wang, Zhixiang Yin, Penghai Wu

Author(s): Adugna G. Mullissa, Diego Marcos, Devis Tuia, Martin Herold, Johannes Reiche

2020, Vol. 12 (6), pp. 1006
Author(s): Davide Cozzolino, Luisa Verdoliva, Giuseppe Scarpa, Giovanni Poggi

We propose a new method for SAR image despeckling, which performs nonlocal filtering with a deep learning engine. Nonlocal filtering has proven very effective for SAR despeckling. The key idea is to exploit image self-similarities to estimate the hidden signal. In its simplest form, pixel-wise nonlocal means, the target pixel is estimated through a weighted average of neighbors, with weights chosen on the basis of a patch-wise measure of similarity. Here, we keep the very same structure of plain nonlocal means, to ensure interpretability of results, but use a convolutional neural network to assign weights to estimators. Suitable nonlocal layers are used in the network to take into account information in a large analysis window. Experiments on both simulated and real-world SAR images show that the proposed method exhibits state-of-the-art performance. In addition, the comparison of weights generated by conventional and deep learning-based nonlocal means provides new insight into the potential and limits of nonlocal information for SAR despeckling.
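For illustration, the following is a minimal sketch of plain pixel-wise nonlocal means, the baseline structure the method keeps: each pixel is replaced by a weighted average of pixels in a search window, with weights driven by patch-wise similarity. The patch size, search radius, and exponential weighting kernel are illustrative choices, not the paper's settings.

```python
import numpy as np

def nonlocal_means(img, patch=3, search=10, h=0.1):
    # Plain pixel-wise nonlocal means: each output pixel is a weighted
    # average of pixels in a search window, with weights based on
    # patch-wise similarity (illustrative parameters, not the paper's).
    pad = patch // 2
    padded = np.pad(img, pad + search, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    dist2 = np.mean((ref - cand) ** 2)   # patch-wise similarity
                    weights.append(np.exp(-dist2 / h ** 2))
                    values.append(padded[ni, nj])
            weights = np.asarray(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

In the proposed approach, the exponential kernel above is what gets replaced: a convolutional network with nonlocal layers assigns the weights over a large analysis window, while the weighted-average structure itself is kept so the result stays interpretable.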


2021, Vol. 13 (18), pp. 3636
Author(s): Ye Yuan, Yanxia Wu, Yan Fu, Yulei Wu, Lidan Zhang, ...

As one of the main sources of remote sensing big data, synthetic aperture radar (SAR) provides all-day, all-weather Earth image acquisition. However, speckle noise in SAR images is a notable limitation for big data applications, including image analysis and interpretation. Deep learning has proven to be an effective approach to SAR image despeckling. Most existing deep-learning-based methods adopt supervised learning and train their despeckling networks on synthetic speckled images, because they need clean images as references and purely clean SAR images are hard to obtain in real-world conditions. However, significant differences between synthetic speckled and real SAR images create a domain gap: such networks do not despeckle real SAR images as well as they do synthetic ones. Inspired by recent studies on self-supervised denoising, we propose an advanced SAR image despeckling method based on Bernoulli-sampling self-supervised deep learning, called SSD-SAR-BS. Using only real speckled SAR images, Bernoulli-sampled speckled image pairs (input-target) are obtained as the training data, and a multiscale despeckling network is trained on these pairs. In addition, a dropout-based ensemble is introduced to boost performance. Extensive experiments demonstrate that the proposed method outperforms the state of the art in speckle suppression on both synthetic speckled and real SAR datasets (i.e., Sentinel-1 and TerraSAR-X).
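As a rough illustration of the Bernoulli-sampling idea, the sketch below turns a single real speckled image into an input-target training pair by masking complementary subsets of pixels; the sampling probability and the complementary-mask convention are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def bernoulli_pair(speckled, p=0.5, rng=None):
    # Build one (input, target) training pair from a single real speckled
    # SAR image via Bernoulli sampling, so no clean reference is needed.
    # The probability p and the complementary-mask pairing are assumptions.
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(speckled.shape) < p      # Bernoulli mask
    net_input = speckled * mask                # pixels kept for the input
    net_target = speckled * ~mask              # complementary pixels as target
    return net_input, net_target, mask
```

In a self-supervised setup of this kind, the despeckling loss would typically be evaluated only on the held-out (target) pixels, and the dropout-based ensemble mentioned above would average several stochastic forward passes at inference time.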


2019, Vol. 9 (22), pp. 4749
Author(s): Lingyun Jiang, Kai Qiao, Linyuan Wang, Chi Zhang, Jian Chen, ...

Decoding human brain activity, especially reconstructing visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data restrict satisfactory reconstruction, especially for deep-learning-based reconstruction methods, which require large amounts of labelled samples. In contrast to deep learning methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduce comparison into the deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. On this basis, we propose a Siamese reconstruction network (SRN) method. Using the SRN, we obtain improved results on two fMRI recording datasets, achieving 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to about 2n sample pairs, taking full advantage of the limited quantity of training samples. The SRN learns to draw together sample pairs of the same class and disperse sample pairs of different classes in feature space.
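The pairwise idea can be sketched as a shared encoder plus a contrastive-style objective that pulls same-class pairs together and pushes different-class pairs apart in feature space; the layer sizes, input dimensionality, and margin below are placeholders, not the paper's architecture, and the image-reconstruction branch is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    # Shared encoder applied to both members of an fMRI sample pair.
    # Input and hidden sizes are placeholders, not the paper's values.
    def __init__(self, in_dim=1000, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, x1, x2):
        return self.net(x1), self.net(x2)

def contrastive_loss(f1, f2, same_class, margin=1.0):
    # Pull same-class pairs together and push different-class pairs
    # apart in feature space; the margin is an illustrative choice.
    d = F.pairwise_distance(f1, f2)
    pos = same_class * d.pow(2)
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()
```

Here same_class would be a 0/1 tensor marking whether the two fMRI samples in a pair correspond to the same stimulus class; forming such pairs is what expands the n training samples into a larger set of training pairs.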


2021
Author(s): Francesco Banterle, Rui Gong, Massimiliano Corsini, Fabio Ganovelli, Luc Van Gool, ...

Energies, 2021, Vol. 14 (15), pp. 4595
Author(s): Parisa Asadi, Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, it is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach to reliable phase segmentation. Machine-learning-based phase segmentation of X-ray CT images also enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques combined with three machine learning methods and a deep learning method to assess their potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from the images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and feed-forward artificial neural network methods, as well as a modified U-Net model, were applied to the extracted input features. The models' performances were then compared to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms reaching high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively. In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples such as shales, which are significant unconventional reservoirs for oil recovery.
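As a rough sketch of the feature-based Random Forest path, the code below builds a small per-pixel feature stack from well-known filters and fits a pixel-level classifier; the specific filter bank, the scikit-image/scikit-learn choices, and the hyperparameters are illustrative assumptions, not the study's configuration (which also draws features from the second convolutional layer of a pre-trained VGG16).

```python
import numpy as np
from scipy import ndimage
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    # Per-pixel feature stack: raw attenuation plus a few well-known
    # filter responses (an illustrative filter bank, not the study's).
    feats = [
        img,
        filters.gaussian(img, sigma=2),
        filters.sobel(img),
        ndimage.median_filter(img, size=3),
    ]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_segmenter(ct_slice, labels):
    # Fit a Random Forest on labelled pixels for phase segmentation.
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(pixel_features(ct_slice), labels.ravel())
    return clf

def segment(clf, ct_slice):
    # Predict a phase label for every pixel of a new CT slice.
    return clf.predict(pixel_features(ct_slice)).reshape(ct_slice.shape)
```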

