Cloud Removal with Fusion of High Resolution Optical and SAR Images Using Generative Adversarial Networks

2020, Vol. 12 (1), pp. 191
Author(s):  
Jianhao Gao ◽  
Qiangqiang Yuan ◽  
Jie Li ◽  
Hai Zhang ◽  
Xin Su

Clouds are one of the main causes of missing information in optical remote sensing images, restricting their further application to Earth observation, so reconstructing the information lost to cloud cover is of great concern. Inspired by image-to-image translation work based on convolutional neural network models and the idea of heterogeneous information fusion, we propose a novel cloud removal method in this paper. The approach can be roughly divided into two steps: in the first step, a specially designed convolutional neural network (CNN) translates synthetic aperture radar (SAR) images into simulated optical images in an object-to-object manner; in the second step, the simulated optical image, the SAR image, and the cloud-corrupted optical image are fused by a generative adversarial network (GAN) with a particular loss function to reconstruct the corrupted area. Between the two steps, the contrast and luminance of the simulated optical image are randomly altered to make the model more robust. Two simulation experiments and one real-data experiment on Sentinel-1/2, GF-2/3, and airborne SAR/optical data confirm the effectiveness of the proposed method. The results demonstrate that it outperforms state-of-the-art algorithms that also employ SAR images as auxiliary data.
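
The two-step pipeline lends itself to a compact sketch. The PyTorch code below is a hedged illustration, not the authors' implementation: the real networks are far deeper, and the layer choices, channel counts, and jitter range used here (`SARToOpticalCNN`, `FusionGenerator`, `max_delta`) are illustrative assumptions.

```python
# Hypothetical sketch of the two-step pipeline (not the authors' code).
import torch
import torch.nn as nn

class SARToOpticalCNN(nn.Module):
    """Step 1: translate a SAR image (1 channel) into a simulated optical image (3 channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, sar):
        return self.net(sar)

def random_contrast_luminance(img, max_delta=0.2):
    """Randomly perturb contrast and luminance between the two steps (range is an assumption)."""
    alpha = 1.0 + (2 * torch.rand(1).item() - 1) * max_delta  # contrast factor
    beta = (2 * torch.rand(1).item() - 1) * max_delta          # luminance shift
    return (alpha * img + beta).clamp(0.0, 1.0)

class FusionGenerator(nn.Module):
    """Step 2: GAN generator fusing simulated optical + SAR + cloudy optical (3 + 1 + 3 channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, simulated, sar, cloudy):
        return self.net(torch.cat([simulated, sar, cloudy], dim=1))

sar = torch.rand(1, 1, 256, 256)
cloudy = torch.rand(1, 3, 256, 256)
simulated = random_contrast_luminance(SARToOpticalCNN()(sar))
reconstructed = FusionGenerator()(simulated, sar, cloudy)  # (1, 3, 256, 256)
```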

2021, Vol. 13 (18), pp. 3575
Author(s):  
Jie Guo ◽  
Chengyu He ◽  
Mingjin Zhang ◽  
Yunsong Li ◽  
Xinbo Gao ◽  
...  

With its all-day, all-weather acquisition capability, synthetic aperture radar (SAR) remote sensing is an important technique in modern Earth observation. However, interpreting SAR images is highly challenging, even for well-trained experts, because of the SAR imaging principle and high-frequency speckle noise. Image-to-image translation methods have been used to convert SAR images into optical images closer to what we perceive with our eyes, but these methods have two weaknesses: (1) they are not specifically designed for the SAR-to-optical translation task and therefore lose sight of the complexity of SAR images and of the speckle noise; (2) the same convolution filters in a standard convolution layer are applied across the whole feature map, which ignores local detail in each window of the SAR image and produces images of unsatisfactory quality. In this paper, we propose an edge-preserving convolutional generative adversarial network (EPCGAN) that enhances the structure and aesthetics of the output image by leveraging the edge information of the SAR image and implementing content-adaptive convolution. The proposed edge-preserving convolution (EPC) decomposes the convolution input into texture components and content components and then generates a content-adaptive kernel to modify the standard convolutional filter weights for the content components. Based on the EPC, the EPCGAN is presented for SAR-to-optical image translation; it uses a gradient branch to assist in the recovery of structural image information. Experiments on the SEN1-2 dataset demonstrate that the proposed method outperforms other SAR-to-optical methods, recovering more structure and yielding superior evaluation indices.
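
The edge-preserving convolution can be sketched in simplified form. The decomposition below (average-pool smoothing for the content component, the residual as the texture component, and a sigmoid-gated per-pixel modulation standing in for the content-adaptive kernel) is an assumption for illustration, not the paper's exact EPC.

```python
# Simplified, assumed rendering of an edge-preserving, content-adaptive convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgePreservingConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.content_conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.texture_conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # Predicts per-pixel weights that modulate the content branch.
        self.kernel_net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        content = F.avg_pool2d(x, 3, stride=1, padding=1)  # smooth "content" component
        texture = x - content                               # high-frequency "texture" residual
        modulation = self.kernel_net(x)                     # content-adaptive weights
        return modulation * self.content_conv(content) + self.texture_conv(texture)

y = EdgePreservingConv(1, 64)(torch.rand(1, 1, 128, 128))  # (1, 64, 128, 128)
```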


Author(s):  
L. E. Christovam ◽  
M. H. Shimabukuro ◽  
M. L. B. T. Galo ◽  
E. Honkavaara

Abstract. Most methods developed to map crop fields with high quality are based on optical image time-series. However, the accuracy of these approaches is often degraded by clouds and cloud shadows, which reduce the availability of the optical data needed to represent crop phenological stages. The objective of this study was therefore to implement and evaluate the conditional Generative Adversarial Network (cGAN), which has been indicated as a potential tool for cloud and cloud shadow removal, and to compare it with the Whittaker Smoother (WS), a well-known data-cleaning algorithm. The dataset used to train and assess the methods was the Luis Eduardo Magalhães benchmark for tropical agricultural remote sensing applications. We selected one MSI/Sentinel-2 and C-SAR/Sentinel-1 image pair acquired on dates as close as possible. A total of 5000 image-pair patches were generated to train the cGAN model, which was used to derive synthetic optical pixels for a testing area. Visual analysis, spectral behaviour comparison, and classification were used to evaluate and compare the pixels generated with the cGAN and WS against the pixel values from the real image. The cGAN provided pixel values consistent with the real ones for most crop types and significantly outperformed the WS. The results indicate that the cGAN has potential to fill cloud and cloud shadow gaps in optical image time-series.
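
The Whittaker Smoother baseline is a standard penalized least-squares smoother: it minimizes ||y - z||² + λ||Dz||², whose solution is z = (I + λDᵀD)⁻¹y with D a d-th order difference matrix. A minimal reference implementation can therefore be given; the `lam` value and toy series below are illustrative only.

```python
# Minimal Whittaker smoother: solve (I + lam * D'D) z = y.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, lam=100.0, d=2):
    n = len(y)
    D = sparse.eye(n, format="csr")
    for _ in range(d):
        D = D[1:] - D[:-1]                    # d-th order difference matrix
    A = sparse.eye(n, format="csr") + lam * (D.T @ D)
    return spsolve(A.tocsc(), y)

# Toy example: smooth a noisy NDVI-like time series (lam is illustrative).
t = np.linspace(0, 2 * np.pi, 100)
noisy = np.sin(t) + 0.2 * np.random.randn(100)
smoothed = whittaker_smooth(noisy, lam=50.0)
```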


Author(s):  
S. Anthoniraj ◽  
P. Karthikeyan ◽  
V. Vivek

Demand for agricultural crops is increasing day by day because of population growth. Crop production can be increased by removing weeds from the field, but weed detection is a complicated problem in agriculture. The main objective of this paper is to improve the accuracy of weed detection by combining generative adversarial networks and convolutional neural networks. We implemented several deep learning models, namely a combined Generative Adversarial Network and Deep Convolutional Neural Network (GAN-DCNN), AlexNet, VGG16, ResNet50, and GoogLeNet, to perform weed detection. The Generative Adversarial Network generates weed images, and the Deep Convolutional Neural Network detects the weeds in them. The GAN-DCNN method outperforms existing weed detection methods: simulation results confirm improved performance, with a maximum weed detection rate of 87.12% and an accuracy of 96.34%.
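
A hedged sketch of the GAN-DCNN idea follows: a DCGAN-style generator synthesizes weed images to enlarge the training set, and a small convolutional classifier distinguishes weed from crop. The architectures, sizes, and class layout are illustrative assumptions, not the paper's exact networks.

```python
# Illustrative GAN-DCNN sketch (assumed architectures).
import torch
import torch.nn as nn

class WeedGenerator(nn.Module):
    """DCGAN-style generator: latent vector -> 64x64 RGB weed image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),                                 # 64x64
        )

    def forward(self, z):
        return self.net(z)

class WeedClassifier(nn.Module):
    """Small DCNN: weed vs. crop logits from a 64x64 RGB patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 16 * 16, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

z = torch.randn(8, 100, 1, 1)
synthetic = WeedGenerator()(z)        # (8, 3, 64, 64) synthetic weed images
logits = WeedClassifier()(synthetic)  # (8, 2)
```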


Sensors, 2021, Vol. 21 (15), pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreation but in day-to-day applications in engineering, medicine, logistics, security, and other fields. Alongside these useful applications, their potential use in malicious activities has raised alarming concerns about physical infrastructure security, safety, and privacy. To address this problem, we propose a novel solution that automates drone detection and identification from a drone's acoustic features using different deep learning algorithms. However, the lack of acoustic drone datasets hinders the implementation of an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and drone audio samples generated artificially with a state-of-the-art deep learning technique, the Generative Adversarial Network. Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, for drone detection and identification, and we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of deep learning techniques for drone detection and identification and confirm our hypothesis that Generative Adversarial Networks can generate realistic drone audio clips that enhance the detection of new and unfamiliar drones.
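
For concreteness, a minimal Convolutional Recurrent Neural Network of the kind compared above can be sketched as follows; the input shape (a mel-spectrogram) and all layer sizes are assumptions, not the authors' exact model.

```python
# Assumed CRNN sketch: conv layers extract spectro-temporal features, a GRU
# models the sequence, and a linear head outputs drone / no-drone logits.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(input_size=64 * (n_mels // 4), hidden_size=128,
                          batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, spec):                  # spec: (batch, 1, n_mels, time)
        f = self.conv(spec)                   # (batch, 64, n_mels/4, time/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (batch, time/4, 64 * n_mels/4)
        out, _ = self.gru(f)
        return self.head(out[:, -1])          # logits from the last time step

logits = CRNN()(torch.rand(4, 1, 64, 128))    # (4, 2)
```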


Author(s):  
Ramesh Adhikari ◽  
Suresh Pokharel

Data augmentation is widely used in image processing and pattern recognition to increase the diversity of available data, and it is commonly used to improve classification accuracy when the available datasets are limited. Deep learning approaches have produced immense breakthroughs in medical diagnostics over the last decade, but effective training of deep neural networks requires large datasets. Appropriate data augmentation prevents the model from over-fitting and thus increases the network's ability to generalize when it is later tested on unseen data. In the medical field, however, obtaining large datasets for rare diseases remains a huge challenge. This study presents a synthetic data augmentation technique using Generative Adversarial Networks to exploit existing data more effectively and to evaluate the generalization capability of neural networks. In this research, a convolutional neural network (CNN) model is used to classify chest X-ray images as normal or pneumonia; synthetic X-ray images are then generated from the available dataset using a deep convolutional generative adversarial network (DCGAN) model. Finally, the CNN model is retrained on the original dataset together with the augmented data generated by the DCGAN. The classification performance of the CNN model improved by 3.2% when the augmented data were used alongside the original dataset.
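
The augmentation workflow can be sketched as follows: images sampled from a trained DCGAN generator are concatenated with the original dataset before the CNN is retrained. The generator below is a stand-in, and all names, sizes, and labels are illustrative assumptions.

```python
# Sketch of the augmentation workflow (assumed names and sizes).
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

class DummyGenerator(torch.nn.Module):
    """Stand-in for a trained DCGAN generator (replace with the real model)."""
    def forward(self, z):
        return torch.rand(z.shape[0], 1, 64, 64)

def build_augmented_dataset(real_images, real_labels, generator,
                            n_synthetic=100, synthetic_label=1, z_dim=100):
    """Concatenate real X-rays with DCGAN-generated ones (e.g., the pneumonia class)."""
    with torch.no_grad():
        fake_images = generator(torch.randn(n_synthetic, z_dim, 1, 1))
    fake_labels = torch.full((n_synthetic,), synthetic_label, dtype=torch.long)
    return ConcatDataset([TensorDataset(real_images, real_labels),
                          TensorDataset(fake_images, fake_labels)])

x, y = torch.rand(200, 1, 64, 64), torch.randint(0, 2, (200,))
loader = DataLoader(build_augmented_dataset(x, y, DummyGenerator()),
                    batch_size=32, shuffle=True)  # retrain the CNN on this loader
```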


Artificial Neural Networks (ANNs) have evolved through many stages over the last three decades, with many researchers contributing to this challenging field. With the power of mathematics, ANNs can also solve complex problems. Architectures such as the Convolutional Neural Network (CNN), Deep Neural Network, Generative Adversarial Network (GAN), Long Short-Term Memory (LSTM) network, Recurrent Neural Network (RNN), and Ordinary Differential Network play promising roles in many MNCs and IT industries thanks to their predictive power and accuracy. In this paper, a Convolutional Neural Network is used to recognize beep sounds at high noise levels. Based on supervised learning, the research develops a CNN architecture for beep sound recognition in noisy conditions. The proposed method gives better results, with an accuracy of 96%. The prototype was tested with several architectures on the training and test data, of which a two-layer CNN classifier gave the best predictions.
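
A two-layer CNN classifier of the kind reported best can be sketched as follows, assuming a spectrogram patch as input; the exact architecture in the paper may differ.

```python
# Assumed two-layer CNN sketch for beep detection in noisy audio.
import torch
import torch.nn as nn

class BeepCNN(nn.Module):
    def __init__(self, n_classes=2):           # beep vs. no beep
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, spec):                    # spec: (batch, 1, 64, 64)
        return self.classifier(self.features(spec).flatten(1))

logits = BeepCNN()(torch.rand(8, 1, 64, 64))    # (8, 2)
```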


2020, Vol. 12 (6), pp. 944
Author(s):  
Jin Zhang ◽  
Hao Feng ◽  
Qingli Luo ◽  
Yu Li ◽  
Jujie Wei ◽  
...  

Oil spill detection plays an important role in marine environment protection. Quad-polarimetric Synthetic Aperture Radar (SAR) has proven to have great potential for this task, and different polarimetric SAR features offer advantages in distinguishing oil spill areas from look-alikes. In this paper we propose an oil spill detection method based on a convolutional neural network (CNN) and Simple Linear Iterative Clustering (SLIC) superpixels. Experiments were conducted on three Single Look Complex (SLC) quad-polarimetric SAR images obtained by Radarsat-2 and the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). Several groups of polarimetric parameters, including the H/A/Alpha decomposition, Single-Bounce Eigenvalue Relative Difference (SERD), correlation coefficients, conformity coefficients, Freeman 3-component decomposition, and Yamaguchi 4-component decomposition, were extracted as feature sets. Among all the polarimetric features considered, the Yamaguchi parameters achieved the highest performance, with a total Mean Intersection over Union (MIoU) of 90.5%. The SLIC superpixel method significantly improved oil spill classification accuracy on all the polarimetric feature sets: the classification accuracy of all target types improved, and the largest increase in mean MIoU across all feature sets, 21.9%, was obtained for emulsions.
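
The superpixel refinement step can be illustrated with a short sketch: SLIC groups pixels of a polarimetric feature image into superpixels, and a per-pixel classification map (for example, the CNN output) is cleaned up by majority vote within each superpixel. The feature values and class layout below are placeholders.

```python
# Hedged sketch of SLIC-based refinement of a per-pixel classification map.
import numpy as np
from skimage.segmentation import slic

def refine_with_slic(features, pixel_labels, n_segments=500):
    """features: (H, W, C) polarimetric features; pixel_labels: (H, W) class map."""
    segments = slic(features, n_segments=n_segments, compactness=10.0,
                    channel_axis=-1)
    refined = np.empty_like(pixel_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        counts = np.bincount(pixel_labels[mask])
        refined[mask] = counts.argmax()       # majority class in the superpixel
    return refined

features = np.random.rand(128, 128, 3)               # e.g., Yamaguchi decomposition bands
pixel_labels = np.random.randint(0, 3, (128, 128))   # oil / look-alike / sea
refined = refine_with_slic(features, pixel_labels)
```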

