Deep Learning Approaches to Earth Observation Change Detection

2021, Vol 13 (20), pp. 4083
Author(s):  
Antonio Di Pilato ◽  
Nicolò Taggio ◽  
Alexis Pompili ◽  
Michele Iacobellis ◽  
Adriano Di Florio ◽  
...  

The interest in change detection in the field of remote sensing has increased in recent years. Searching for changes in satellite images has many useful applications, ranging from land-cover and land-use analysis to anomaly detection. In particular, urban change detection provides an efficient tool to study urban spread and growth over several years of observation. At the same time, change detection is often a computationally challenging and time-consuming task; the standard approach, in which elements of interest are manually detected by experts in the domain of Earth Observation, therefore needs to be replaced by innovative methods that can deliver reliable results within a reasonable time. In this paper, we present two different approaches to change detection, semantic segmentation and classification, both of which exploit convolutional neural networks to address these needs and can be further refined and used in post-processing workflows for a large variety of applications.
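As a minimal sketch of the classification-style approach described above (not the authors' actual pipeline): once a CNN has produced per-pixel land-cover label maps for two acquisition dates, a binary change map falls out of a simple element-wise comparison. The function name and toy label maps here are hypothetical.

```python
import numpy as np

def change_mask(labels_t0: np.ndarray, labels_t1: np.ndarray) -> np.ndarray:
    """Binary change map: 1 wherever the predicted class differs between dates."""
    assert labels_t0.shape == labels_t1.shape
    return (labels_t0 != labels_t1).astype(np.uint8)

# Toy 2x2 label maps (e.g., 0 = water, 1 = vegetation, 2 = built-up).
t0 = np.array([[0, 1],
               [2, 2]])
t1 = np.array([[0, 2],
               [2, 0]])
mask = change_mask(t0, t1)  # [[0, 1], [0, 1]]
```

In practice this raw mask would feed the post-processing workflows the abstract mentions (filtering small connected components, masking clouds, etc.).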

2021, Vol 15 (02)
Author(s):  
Annus Zulfiqar ◽  
Muhammad M. Ghaffar ◽  
Muhammad Shahzad ◽  
Christian Weis ◽  
Muhammad I. Malik ◽  
...  

Author(s):  
M. Abdessetar ◽  
Y. Zhong

Building change detection can quantify temporal effects on urban areas for urban evolution studies or for damage assessment in disaster cases. In this context, change analysis may involve using the available satellite images at different resolutions for quick responses. In this paper, to avoid the resampling outcomes and salt-and-pepper effect of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, shape analysis is practical for detecting building changes in multi-scale imagery. The proposed methodology can therefore handle different pixel sizes when identifying new and demolished buildings in urban areas using the geometric properties of the objects of interest. After rectifying the desired multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T<sub>0</sub> to shape T<sub>1</sub> and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified from the obtained distances that are greater than the RMS value (no match at the same location).
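The centroid-matching step lends itself to a short sketch. The following is an illustrative implementation of the bidirectional nearest-centroid test the abstract describes, with hypothetical function names and toy coordinates; the real method additionally uses object-based classification to obtain the shapes.

```python
import numpy as np

def unmatched_indices(src, dst, rms):
    """Indices of src centroids with no dst centroid within the RMS threshold."""
    out = []
    for i, c in enumerate(src):
        d = np.linalg.norm(dst - c, axis=1).min() if len(dst) else np.inf
        if d > rms:
            out.append(i)
    return out

def centroid_coincident_match(cents_t0, cents_t1, rms):
    """T0 centroids unmatched in T1 => demolished; T1 unmatched in T0 => new."""
    cents_t0 = np.asarray(cents_t0, dtype=float)
    cents_t1 = np.asarray(cents_t1, dtype=float)
    demolished = unmatched_indices(cents_t0, cents_t1, rms)
    new = unmatched_indices(cents_t1, cents_t0, rms)
    return new, demolished

# Toy building centroids (map units): one stable, one demolished, one new.
t0 = [(10.0, 10.0), (50.0, 50.0)]
t1 = [(10.5, 9.8), (80.0, 80.0)]
new, demolished = centroid_coincident_match(t0, t1, rms=2.0)
# new == [1] (building at (80, 80)); demolished == [1] (building at (50, 50))
```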


2021, Vol 12 (1), pp. 26-31
Author(s):  
A. Abhyankar ◽  
T. Sahoo ◽  
B. Seth ◽  
P. Mohapatra ◽  
S. Palai ◽  
...  

The study focuses on the mangroves in two districts, Mumbai and Mumbai Suburban. Mumbai, a coastal megacity, is the financial capital of the country and has a high population density. Mumbai is facing depletion of coastal resources due to land scarcity and large developmental projects; it is therefore important to monitor these resources accurately and protect stakeholders' interests. Cloud-free satellite images of IRS P6 LISS III from 2004 and 2013 were procured from the National Remote Sensing Centre (NRSC), Hyderabad. Two visible bands and one NIR band were utilized for landcover classification. Supervised classification with a Maximum Likelihood Estimator was used. The images were classified into landcover classes, namely Dense Mangroves, Sparse Mangroves and Others. Two software packages, ERDAS Imagine and GRAM++, were used for landcover classification and change detection analysis. The total mangrove area in Mumbai was observed to be 50.52 square kilometers in 2004 and 48.7 square kilometers in 2013. The contribution of sparse mangroves to the study area was 72.31% in 2004 and 87.06% in 2013.
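Maximum likelihood classification, as used in this study, assigns each pixel's band vector to the class whose fitted Gaussian gives the highest log-likelihood. The sketch below is a generic numpy illustration of that decision rule (class statistics would normally be estimated from training polygons in ERDAS Imagine or similar); function names and the toy statistics are assumptions.

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """pixels: (N, B) band vectors; means/covs: per-class Gaussian parameters.
    Returns the index of the class with the highest log-likelihood per pixel."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        diff = pixels - mu
        maha = np.einsum('nb,bc,nc->n', diff, inv, diff)  # Mahalanobis distance
        scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Toy 2-band example with two well-separated classes.
means = [np.array([0.0, 0.0]), np.array([10.0, 10.0])]
covs = [np.eye(2), np.eye(2)]
pixels = np.array([[1.0, 0.0], [9.0, 9.0]])
labels = ml_classify(pixels, means, covs)  # [0, 1]
```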


2019, Vol 11 (18), pp. 2173
Author(s):  
Jinlei Ma ◽  
Zhiqiang Zhou ◽  
Bo Wang ◽  
Hua Zong ◽  
Fei Wu

To accurately detect ships of arbitrary orientation in optical remote sensing images, we propose a two-stage CNN-based ship-detection method built on ship center and orientation prediction. A center region prediction network and a ship orientation classification network are constructed to generate rotated region proposals, from which rotated bounding boxes are predicted to locate arbitrarily oriented ships more accurately. The two networks share the same deconvolutional layers and perform semantic segmentation to predict the center regions and the orientations of ships, respectively. They provide the potential center points of the ships, helping to determine more confident locations for the region proposals, as well as ship orientation information, which benefits the more reliable predetermination of rotated region proposals. Classification and regression are then performed for the final ship localization. Compared with typical object-detection methods for natural images and with other ship-detection methods, our method more accurately detects multiple ships in high-resolution remote sensing images, irrespective of ship orientation and even when ships are docked very close together. Experiments demonstrate a promising improvement in ship-detection performance.
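The geometric core of a rotated proposal is converting a predicted center, size and orientation into corner coordinates. This is a generic sketch of that conversion, not the paper's code; the function name and the counter-clockwise corner ordering are assumptions.

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, theta):
    """Corner coordinates of a w-by-h box centered at (cx, cy),
    rotated by theta radians about its center."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    half = np.array([[-w / 2, -h / 2], [ w / 2, -h / 2],
                     [ w / 2,  h / 2], [-w / 2,  h / 2]])
    return half @ R.T + np.array([cx, cy])

# Axis-aligned case (theta = 0) for a 4x2 box centered at (5, 5).
corners = rotated_box_corners(5.0, 5.0, 4.0, 2.0, 0.0)
# [[3, 4], [7, 4], [7, 6], [3, 6]]
```

A non-zero theta rotates those four corners about the center, which is how orientation-aware proposals tighten around ships docked side by side.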


2020, Vol 12 (11), pp. 1868
Author(s):  
Huihui Dong ◽  
Wenping Ma ◽  
Yue Wu ◽  
Jun Zhang ◽  
Licheng Jiao

Traditional change detection (CD) methods operate on the raw image domain or on hand-crafted features, which makes them less robust to inconsistencies (e.g., in brightness and noise distribution) between bitemporal satellite images. Recently, deep learning techniques have reported compelling performance in robust feature learning. However, generating accurate semantic supervision that reveals real change information in satellite images remains challenging, especially through manual annotation. To solve this problem, we propose a novel self-supervised representation learning method based on temporal prediction for remote sensing image CD. The main idea of our algorithm is to transform the two satellite images into more consistent feature representations through a self-supervised mechanism, without semantic supervision or additional computation. From the transformed feature representations, a better difference image (DI) can be obtained, which reduces the error the DI propagates to the final detection result. In the self-supervised mechanism, the network is asked to identify which temporal image a sample patch comes from, namely, temporal prediction. By designing the network for the temporal prediction task to imitate the discriminator of a generative adversarial network, distribution-aware feature representations are automatically captured and results with strong robustness can be acquired. Experimental results on real remote sensing datasets show the effectiveness and superiority of our method, improving detection precision by 0.94–35.49%.
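A difference image over learned feature maps, as described above, can be sketched in a few lines: the per-pixel distance between the two representations, followed by a simple global threshold to get a change map. This is an illustrative stand-in (the paper's DI analysis is more involved); names and the mean-plus-k-sigma threshold are assumptions.

```python
import numpy as np

def difference_image(feat_t0, feat_t1):
    """Per-pixel Euclidean distance between two (H, W, C) feature maps."""
    return np.linalg.norm(feat_t1.astype(float) - feat_t0.astype(float), axis=-1)

def change_map(di, k=1.0):
    """Naive global threshold: pixels whose DI exceeds mean + k*std are 'changed'."""
    return di > di.mean() + k * di.std()

# Toy 2x2 feature maps with 3 channels; only pixel (0, 0) changes.
feat_t0 = np.zeros((2, 2, 3))
feat_t1 = feat_t0.copy()
feat_t1[0, 0] = [3.0, 4.0, 0.0]
di = difference_image(feat_t0, feat_t1)   # di[0, 0] == 5.0, rest 0
cm = change_map(di)                       # True only at (0, 0)
```

The point of the self-supervised transform is that `feat_t0` and `feat_t1` live in a consistent space, so this distance reflects real change rather than radiometric inconsistency.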


2021, Vol 13 (19), pp. 3836
Author(s):  
Clément Dechesne ◽  
Pierre Lassalle ◽  
Sébastien Lefèvre

In recent years, numerous deep learning techniques have been proposed to tackle the semantic segmentation of aerial and satellite images; they top the leaderboards of the main scientific contests and represent the current state of the art. Nevertheless, despite their promising results, these techniques are still unable to provide results with the level of accuracy sought in real applications, i.e., in operational settings. It is therefore essential to qualify these segmentation results and estimate the uncertainty introduced by a deep network. In this work, we address uncertainty estimation in semantic segmentation. To do so, we rely on a Bayesian deep learning method based on Monte Carlo Dropout, which allows us to derive uncertainty metrics along with the semantic segmentation. Built on the widespread U-Net architecture, our model achieves semantic segmentation with high accuracy on several state-of-the-art datasets. More importantly, uncertainty maps are also derived from our model. They enable a sounder qualitative evaluation of the segmentation results and provide valuable information for improving the reference databases.
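Monte Carlo Dropout amounts to keeping dropout active at inference time, averaging several stochastic softmax passes, and reading the predictive entropy as an uncertainty map. The sketch below is a generic numpy illustration (no U-Net; the toy "network" is just fixed logits with a dropout mask), with all names hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(forward, x, T=30, rng=None):
    """Run T stochastic forward passes (dropout left on); return the
    mean-probability class map and per-pixel predictive entropy."""
    rng = rng or np.random.default_rng(0)
    probs = np.stack([forward(x, rng) for _ in range(T)])   # (T, H, W, C)
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=-1)   # high = uncertain
    return mean.argmax(axis=-1), entropy

# Toy stochastic "network": fixed logits, dropout applied before the softmax.
logits = np.zeros((2, 2, 3))
logits[..., 0] = 4.0                                        # confident class 0
def forward(x, rng):
    keep = rng.random(x.shape) > 0.5                        # dropout mask, p = 0.5
    return softmax(x * keep)

pred, unc = mc_dropout_predict(forward, logits)             # pred == 0 everywhere
```

Pixels whose prediction flips across passes accumulate high entropy, which is exactly what the uncertainty maps described above visualize.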


Author(s):  
E. Bousias Alexakis ◽  
C. Armenakis

Abstract. Over the past few years, many research works have utilized Convolutional Neural Networks (CNNs) in the development of fully automated change detection pipelines from high-resolution satellite imagery. Even though CNN architectures can achieve state-of-the-art results in a wide variety of vision tasks, including change detection applications, they require extensive amounts of labelled training examples in order to generalize to new data through supervised learning. In this work, we experiment with a semi-supervised training approach in an attempt to improve the image semantic segmentation performance of models trained on a small number of labelled image pairs, by leveraging information from additional unlabelled image samples. The approach is based on the Mean Teacher method, a semi-supervised technique successfully applied to image classification and to semantic segmentation of medical images. Mean Teacher maintains an exponential moving average of the model weights from previous epochs and checks the consistency of the model's predictions under various perturbations. Our goal is to examine whether its application in a change detection setting can yield analogous performance improvements. The preliminary results of the proposed method appear comparable to those of traditional fully supervised training. Research is continuing towards fine-tuning the method and reaching solid conclusions with respect to the potential benefits of semi-supervised learning approaches in image change detection applications.
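The two moving parts of Mean Teacher are the exponential moving average of the student's weights and a consistency loss between the two models' predictions on the same unlabelled input under different perturbations. A minimal sketch of both, with weights represented as plain arrays (names and the alpha value are illustrative, not the paper's configuration):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    for k in teacher:
        teacher[k] = alpha * teacher[k] + (1.0 - alpha) * student[k]

def consistency_loss(student_probs, teacher_probs):
    """MSE between the two models' softmax outputs on the same
    (differently perturbed) unlabelled input."""
    return float(np.mean((student_probs - teacher_probs) ** 2))

teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
ema_update(teacher, student, alpha=0.9)   # teacher["w"] -> [0.1, 0.1, 0.1]
loss = consistency_loss(np.array([1.0, 0.0]), np.array([0.0, 0.0]))  # 0.5
```

During training, the total objective is the supervised loss on the few labelled pairs plus a weighted consistency term on the unlabelled ones; only the student is updated by gradients, while the teacher follows via `ema_update`.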


2020, Vol 12 (5), pp. 852
Author(s):  
Xin Pan ◽  
Jian Zhao ◽  
Jun Xu

Since the result images obtained by deep semantic segmentation neural networks are usually not perfect, especially at object borders, the conditional random field (CRF) method is frequently utilized in the result post-processing stage to obtain the corrected classification result image. The CRF method has achieved many successes in the field of computer vision, but when it is applied to remote sensing images, overcorrection phenomena may occur. This paper proposes an end-to-end and localized post-processing method (ELP) to correct the result images of high-resolution remote sensing image classification methods. ELP has two advantages. (1) End-to-end evaluation: ELP can identify which locations of the result image are highly suspected of having errors without requiring samples. This characteristic allows ELP to be adapted to an end-to-end classification process. (2) Localization: Based on the suspect areas, ELP limits the CRF analysis and update area to a small range and controls the iteration termination condition. This characteristic avoids the overcorrections caused by the global processing of the CRF. In the experiments, ELP is used to correct the classification results obtained by various deep semantic segmentation neural networks. Compared with traditional methods, the proposed method more effectively corrects the classification result and improves classification accuracy.
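The abstract does not spell out ELP's exact criterion for flagging suspect areas, but a plausible sample-free proxy is low decisiveness of the network itself: pixels where the gap between the top two class probabilities is small. The sketch below illustrates that idea only; the function name, the margin value, and the criterion itself are assumptions, not the paper's method.

```python
import numpy as np

def suspect_mask(probs, margin=0.2):
    """Flag pixels whose top-1 vs. top-2 class-probability gap is small,
    i.e. where the network is least decisive; localized CRF refinement
    would then be restricted to these areas."""
    s = np.sort(probs, axis=-1)
    return (s[..., -1] - s[..., -2]) < margin

# Toy 1x2 softmax map with 3 classes: one confident pixel, one ambiguous.
probs = np.array([[[0.90, 0.05, 0.05],
                   [0.40, 0.35, 0.25]]])
mask = suspect_mask(probs)  # [[False, True]]
```

Restricting the CRF to such a mask (rather than running it globally) is the localization idea the abstract credits with avoiding overcorrection.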

