AUTOMATIC CLOUD DETECTION METHOD BASED ON GENERATIVE ADVERSARIAL NETWORKS IN REMOTE SENSING IMAGES

Author(s):  
J. Li ◽  
Z. Wu ◽  
Z. Hu ◽  
Y. Zhang ◽  
M. Molinier

Abstract. Clouds in optical remote sensing images seriously affect the visibility of background pixels and greatly reduce the availability of the images, so clouds must be detected before the images are processed. In this paper, a novel cloud detection method based on an attentive generative adversarial network (Auto-GAN) is proposed. Our main idea is to inject visual attention into a domain transformation to detect clouds automatically. First, we use a discriminator (D) to distinguish between cloudy and cloud-free images. Then, a segmentation network is used to detect the difference between cloudy and cloud-free images (i.e. the clouds). Last, a generator (G) is used to fill in the detected regions of the cloudy image in order to confuse the discriminator. Auto-GAN only requires images and their image-level labels (1 for a cloud-free image, 0 for a cloudy image) in the training phase, which are far less time-consuming to acquire than the pixel-level labels required by existing CNN-based methods. Auto-GAN is applied to cloud detection in Sentinel-2A Level 1C imagery. The results indicate that the Auto-GAN method performs well in cloud detection over different land surfaces.
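A minimal sketch of one training step under this weakly supervised setup, assuming PyTorch and hypothetical `segmenter`, `generator`, and `discriminator` modules (the abstract states their roles but not their architectures, so everything below is illustrative):

```python
import torch
import torch.nn.functional as F

# Hypothetical networks, roles taken from the abstract:
#   segmenter:     cloudy image  -> soft cloud mask in [0, 1]
#   generator:     cloudy image  -> content used to fill the detected regions
#   discriminator: image         -> probability of being cloud free

def train_step(cloudy, cloud_free, segmenter, generator, discriminator,
               opt_g, opt_d):
    # 1) Discriminator learns from the image-level labels
    #    (1 = cloud free, 0 = cloudy).
    d_loss = F.binary_cross_entropy(discriminator(cloud_free),
                                    torch.ones(cloud_free.size(0), 1)) + \
             F.binary_cross_entropy(discriminator(cloudy),
                                    torch.zeros(cloudy.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Segmenter proposes cloud regions; generator fills them in so the
    #    composite looks cloud free to the discriminator.
    mask = segmenter(cloudy)                              # soft cloud mask
    filled = mask * generator(cloudy) + (1 - mask) * cloudy
    g_loss = F.binary_cross_entropy(discriminator(filled),
                                    torch.ones(cloudy.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At inference time only the segmenter would be needed: its mask is the cloud detection result.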

2021 ◽  
Vol 13 (15) ◽  
pp. 2910
Author(s):  
Xiaolong Li ◽  
Hong Zheng ◽  
Chuanzhao Han ◽  
Wentao Zheng ◽  
Hao Chen ◽  
...  

Clouds constitute a major obstacle to the application of optical remote-sensing images, as they destroy the continuity of the ground information in the images and reduce their utilization rate. Cloud detection has therefore become an important preprocessing step for optical remote-sensing image applications. Because the cloud features used in current cloud-detection methods are mostly interpreted manually and the information in remote-sensing images is complex, the accuracy and generalization of current methods are unsatisfactory. As cloud detection aims to extract cloud regions from the background, it can be regarded as a semantic segmentation problem. A cloud-detection method based on deep convolutional neural networks (DCNN), the spatial folding–unfolding remote-sensing network (SFRS-Net), is introduced in this paper, together with an analysis of why DCNNs are inaccurate during cloud-region segmentation and the concept of spatial folding/unfolding. The backbone network of the proposed method adopts an encoder–decoder structure in which the pooling operation in the encoder is replaced by a folding operation and the upsampling operation in the decoder is replaced by an unfolding operation. As a result, the accuracy of cloud detection is improved while generalization is preserved. In the experiment, multispectral data from the GaoFen-1 (GF-1) satellite were collected to form a dataset, on which the overall accuracy (OA) of this method reaches 96.98%, a satisfactory result. This study aims to develop a method that is suitable for cloud detection and can complement other cloud-detection methods, providing a reference for researchers interested in cloud detection in remote-sensing images.
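The abstract does not define the folding operator precisely; one natural reading is a space-to-depth rearrangement, which halves the spatial size like pooling but, unlike pooling, discards no information. A small sketch under that assumption, using PyTorch's pixel shuffle operations:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 128, 128)           # N, C, H, W feature map

# "Folding": move 2x2 spatial blocks into channels instead of pooling.
# Spatial size halves, channels quadruple, and the map stays lossless.
folded = F.pixel_unshuffle(x, downscale_factor=2)     # -> (1, 256, 64, 64)

# "Unfolding": the inverse rearrangement, used in the decoder in place of
# interpolation-based upsampling.
unfolded = F.pixel_shuffle(folded, upscale_factor=2)  # -> (1, 64, 128, 128)

assert torch.equal(x, unfolded)             # folding is exactly invertible
```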


2020 ◽  
Vol 12 (1) ◽  
pp. 152 ◽  
Author(s):  
Ting Nie ◽  
Xiyu Han ◽  
Bin He ◽  
Xiansheng Li ◽  
Hongxing Liu ◽  
...  

Ship detection in panchromatic optical remote sensing images faces two major challenges: quickly locating candidate regions in complex backgrounds, and describing ships effectively so as to reduce false alarms. Here, a practical method is proposed to address these issues. First, we constructed a novel visual saliency detection method based on a hyper-complex Fourier transform of a quaternion to locate regions of interest (ROIs), which improves the accuracy of the subsequent discrimination stage for panchromatic images compared with the phase spectrum of quaternion Fourier transform (PQFT) method. In addition, Gaussian filtering at different scales was performed on the transformed result to synthesize the best saliency map. An adaptive method based on GrabCut was then used for binary segmentation to extract candidate positions. In the discrimination stage, a rotation-invariant modified local binary pattern (LBP) descriptor was built by combining shape, texture, and moment-invariant features to describe the ship targets more powerfully. Finally, false alarms were eliminated through SVM training. Experimental results on panchromatic optical remote sensing images demonstrated that the presented saliency model is superior under various indicators, and that the proposed ship detection method is accurate and fast with high robustness, based on detailed comparisons with existing efforts.
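A simplified sketch of the phase-spectrum saliency idea with multi-scale Gaussian smoothing, using NumPy/SciPy on a single panchromatic band; the paper's quaternion (hyper-complex) Fourier transform, its rule for selecting the best scale, and the GrabCut segmentation are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_saliency(image, sigmas=(2, 4, 8)):
    """Simplified phase-spectrum saliency for a single-band image.

    Only the phase of a real-valued FFT is kept, in the spirit of PQFT;
    the maps at several Gaussian scales are simply averaged here.
    """
    f = np.fft.fft2(image.astype(np.float64))
    phase_only = np.exp(1j * np.angle(f))            # drop magnitude, keep phase
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2    # back-transform and square

    maps = [gaussian_filter(recon, sigma=s) for s in sigmas]
    saliency = np.mean(maps, axis=0)
    return (saliency - saliency.min()) / (saliency.ptp() + 1e-12)
```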


2020 ◽  
Vol 12 (19) ◽  
pp. 3190
Author(s):  
Xiaolong Li ◽  
Hong Zheng ◽  
Chuanzhao Han ◽  
Haibo Wang ◽  
Kaihan Dong ◽  
...  

Cloud pixels greatly reduce the utilization of optical remote sensing images, which highlights the importance of cloud detection. In the current remote sensing literature, methods such as threshold methods, statistical methods and deep learning (DL) have been applied to cloud detection tasks. Because some cloud areas are translucent, the areas covered by these clouds still retain some ground feature information, which blurs their spectral and spatial characteristics and makes it difficult for existing methods to detect them accurately. To solve this problem, this study presents a cloud detection method based on genetic reinforcement learning. First, the factors that directly affect the classification of pixels in remote sensing images are analyzed, and the concept of the pixel environmental state (PES) is proposed. Then, PES information and the algorithm's labeling action are combined into a “PES-action” data set. Subsequently, a reward–penalty rule is introduced and the “PES-action” strategy with the highest cumulative return is learned by a genetic algorithm (GA). Clouds can then be detected accurately through the learned “PES-action” strategy. By virtue of the strong adaptability of reinforcement learning (RL) to the environment and the global optimization ability of the GA, cloud regions are detected accurately. In the experiment, multi-spectral remote sensing images from SuperView-1 were collected to build the data set, on which clouds were accurately detected. The overall accuracy (OA) of the proposed method on the test set reached 97.15%, and satisfactory cloud masks were obtained. Compared with the best published DL method and the random forest (RF) method, the proposed method is superior in precision, recall, false positive rate (FPR) and OA for cloud detection. This study aims to improve the detection of cloud regions, providing a reference for researchers interested in cloud detection in remote sensing images.
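A toy sketch of the genetic search over a “PES-action” strategy, assuming the pixel environmental state has already been discretized into a small number of integer states; the state definition, the reward and penalty values, and the GA operators below are all assumptions, since the abstract does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 64                      # assumed number of discretized PES values

def cumulative_return(strategy, states, labels, reward=1.0, penalty=1.0):
    """Reward pixels whose action (0 = clear, 1 = cloud) matches the label."""
    actions = strategy[states]
    return reward * np.sum(actions == labels) - penalty * np.sum(actions != labels)

def evolve(states, labels, pop_size=50, generations=200, mutation_rate=0.02):
    # Each individual is a full "PES -> action" lookup table.
    pop = rng.integers(0, 2, size=(pop_size, N_STATES))
    for _ in range(generations):
        fitness = np.array([cumulative_return(ind, states, labels) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # keep better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_STATES)
            child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
            flip = rng.random(N_STATES) < mutation_rate
            child[flip] = 1 - child[flip]                      # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([cumulative_return(ind, states, labels) for ind in pop])]
    return best                     # learned "PES-action" strategy
```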


2020 ◽  
Vol 12 (19) ◽  
pp. 3152
Author(s):  
Luc Courtrai ◽  
Minh-Tan Pham ◽  
Sébastien Lefèvre

This article tackles the problem of detecting small objects in satellite or aerial remote sensing images by relying on super-resolution to increase the spatial resolution of the images, and thus the size and level of detail of the objects to be detected. We show how to improve the super-resolution framework, starting from the training of a generative adversarial network (GAN) based on residual blocks and then integrating it into a cycle model. Furthermore, by adding to the framework an auxiliary network tailored for object detection, we considerably improve the learning and the quality of our final super-resolution architecture and, more importantly, increase the object detection performance. Besides the improvements dedicated to the network architecture, we also focus on training the super-resolution on target objects, leading to an object-focused approach. Furthermore, the proposed strategies do not depend on the choice of a baseline super-resolution framework and hence could be adopted for current and future state-of-the-art models. Our experimental study on small vehicle detection in remote sensing data, conducted on both aerial and satellite images (i.e., the ISPRS Potsdam and xView datasets), confirms the effectiveness of the improved super-resolution methods in assisting with small object detection tasks.
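One way to read the auxiliary-detector idea is as an extra term in the super-resolution objective. A sketch of such a joint loss, assuming a pixel-wise L1 term, an adversarial term, and a detection term with illustrative weights (the actual losses and coefficients are not given in the abstract):

```python
import torch
import torch.nn.functional as F

def joint_sr_detection_loss(sr_image, hr_image, d_fake_logits, det_loss,
                            lambda_adv=1e-3, lambda_det=1e-1):
    """Super-resolution loss augmented with an auxiliary object-detection term.

    sr_image      : generator output (super-resolved image)
    hr_image      : ground-truth high-resolution image
    d_fake_logits : discriminator logits for sr_image
    det_loss      : loss of the auxiliary detector run on sr_image
    The weighting coefficients are illustrative, not taken from the paper.
    """
    l_pixel = F.l1_loss(sr_image, hr_image)
    l_adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return l_pixel + lambda_adv * l_adv + lambda_det * det_loss
```

Because detection gradients flow back into the generator, the super-resolution is pushed toward reconstructions that preserve exactly the details the detector relies on.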


2021 ◽  
Vol 17 (3) ◽  
pp. 235-247
Author(s):  
Jun Zhang ◽  
Junjun Liu

Remote sensing is an indispensable technique for monitoring Earth resources and environmental changes. However, optical remote sensing images often contain large amounts of cloud, especially over tropical rainforest areas, making it difficult to obtain completely cloud-free images. Accurate cloud detection is therefore of great research value for optical remote sensing applications. In this paper, we propose a saliency model-oriented convolutional neural network for cloud detection in remote sensing images. First, we adopt Kernel Principal Component Analysis (KPCA) for unsupervised pre-training of the network. Second, a small number of labeled samples are used to fine-tune the network structure, and the remote sensing images are segmented with a super-pixel approach before cloud detection to eliminate irrelevant background and non-cloud objects. Third, the image blocks are fed into the trained convolutional neural network (CNN) for cloud detection, and the segmented image is then recovered. Fourth, we fuse the detection result with the saliency map of the raw image to further improve the accuracy of the result. Experiments show that the proposed method detects clouds accurately and, compared with other state-of-the-art cloud detection methods, has better robustness.
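A minimal sketch of the final fusion step, assuming the CNN produces a per-pixel cloud-probability map and the saliency map is normalized to [0, 1]; the simple weighted combination below is only one plausible fusion rule, since the abstract does not specify it:

```python
import numpy as np

def fuse_cloud_mask(cnn_prob, saliency, weight=0.5, threshold=0.5):
    """Fuse a CNN cloud-probability map with the raw-image saliency map.

    Both inputs are 2-D arrays scaled to [0, 1]; the weighted average and the
    threshold are illustrative assumptions.
    """
    fused = weight * cnn_prob + (1.0 - weight) * saliency
    return (fused >= threshold).astype(np.uint8)   # 1 = cloud, 0 = background
```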


2021 ◽  
Vol 13 (18) ◽  
pp. 3617
Author(s):  
Xudong Yao ◽  
Qing Guo ◽  
An Li

Clouds in optical remote sensing images cause spectral information to change or be lost, which affects image analysis and application; cloud detection is therefore of great significance. However, current methods have some shortcomings, such as insufficient extendibility due to the use of information from multiple bands, poor generalization due to reliance on manually determined thresholds, and limited accuracy, especially for thin clouds or complex scenes, caused by low-level manual features. Considering these shortcomings and the efficiency requirements of practical applications, we propose a lightweight deep learning cloud detection network based on the DeeplabV3+ architecture and a channel attention module (CD-AttDLV3+) that uses only the most common red–green–blue and near-infrared bands. In the CD-AttDLV3+ architecture, an optimized MobileNetV2 backbone network is used to reduce the number of parameters and calculations. Atrous spatial pyramid pooling effectively reduces the information loss caused by multiple down-samplings while extracting multi-scale features. CD-AttDLV3+ concatenates more low-level features than DeeplabV3+ to improve cloud boundary quality. The channel attention module is introduced to strengthen the learning of important channels and improve training efficiency. Moreover, the loss function is improved to alleviate the imbalance of samples. On the Landsat-8 Biome dataset, CD-AttDLV3+ achieves the highest accuracy in comparison with other methods, including Fmask, SVM, and SegNet, especially for distinguishing clouds from bright surfaces and for detecting light-transmitting thin clouds. It also performs well on other Landsat-8 and Sentinel-2 images. Experimental results indicate that CD-AttDLV3+ is robust, with high accuracy and extendibility.
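The abstract does not detail the channel attention module; a common realization is a squeeze-and-excitation style block, sketched below in PyTorch as an illustration rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: re-weights feature channels by importance."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                 # global average pool -> (N, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # channel-wise re-weighting
```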


2011 ◽  
Vol 271-273 ◽  
pp. 205-210
Author(s):  
Ying Zhao Ma ◽  
Wei Li Jiao ◽  
Wang Wei

Cloud cover is an important factor affecting the quality of optical remote sensing images. Automatically detecting the cloud cover of an image reduces the transmission of useless data and is of great significance for increasing the rate of usable data. This paper presents a method based on Landsat-5 data that automatically marks the location of cloud regions in each image, effectively calculates the cloud cover of each image, and removes useless remote sensing images.
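A minimal sketch of the screening idea, assuming a binary cloud mask has already been produced for each Landsat-5 scene and using a hypothetical cloud-cover threshold for discarding images:

```python
import numpy as np

def cloud_cover_fraction(cloud_mask):
    """Fraction of pixels flagged as cloud in a binary mask (1 = cloud)."""
    return float(np.count_nonzero(cloud_mask)) / cloud_mask.size

def keep_scene(cloud_mask, max_cover=0.8):
    """Discard scenes whose cloud cover exceeds a chosen threshold (assumed 80%)."""
    return cloud_cover_fraction(cloud_mask) <= max_cover
```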

