TCDNet: Trilateral Change Detection Network for Google Earth Image

2020 ◽  
Vol 12 (17) ◽  
pp. 2669
Author(s):  
Junhao Qian ◽  
Min Xia ◽  
Yonghong Zhang ◽  
Jia Liu ◽  
Yiqing Xu

Change detection is a very important technique for remote sensing data analysis. Its mainstream solutions are either supervised or unsupervised. Among supervised methods, most existing change detection methods using deep learning are related to semantic segmentation. However, these methods only use deep learning models to process the global information of an image and do not train specifically on changed and unchanged areas, so many details of local changes cannot be detected. In this work, a trilateral change detection network is proposed. The network has three branches (a main module and two auxiliary modules, all composed of convolutional neural networks (CNNs)), which focus on the overall information of bitemporal Google Earth image pairs, the changed areas, and the unchanged areas, respectively. The proposed method is end-to-end trainable, and no component of the network needs to be trained separately.
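The three-branch idea can be sketched in miniature. The fusion rule, the weights, and the threshold below are hypothetical (the abstract does not specify them), and random arrays stand in for the trained CNN branch outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the three CNN branches: each maps a bitemporal image
# pair to a per-pixel score map (real branches would be trained CNNs).
h, w = 8, 8
main_scores = rng.random((h, w))        # overall change evidence
changed_scores = rng.random((h, w))     # branch specialised on changed areas
unchanged_scores = rng.random((h, w))   # branch specialised on unchanged areas

# Hypothetical fusion: the auxiliary branches bias the main prediction
# towards "changed" or "unchanged" before thresholding.
fused = main_scores + 0.5 * changed_scores - 0.5 * unchanged_scores
change_map = (fused > 0.5).astype(np.uint8)  # binary change map
```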

2021 ◽  
Vol 13 (17) ◽  
pp. 3394 ◽  
Author(s):  
Le Yang ◽  
Yiming Chen ◽  
Shiji Song ◽  
Fan Li ◽  
Gao Huang

Although considerable success has been achieved in change detection on optical remote sensing images, accurate detection of specific changes is still challenging. Due to the diversity and complexity of ground surface changes and the increasing demand for detecting changes that require high-level semantics, we have to resort to deep learning techniques to extract the intrinsic representations of changed areas. However, one key problem in developing deep learning methods for detecting specific change areas is the limited availability of annotated data. In this paper, we collect a change detection dataset with 862 labeled image pairs, in which urban construction-related changes are labeled. Further, we propose a supervised change detection method based on a deep siamese semantic segmentation network to handle the proposed data effectively. The novelty of the method is that the proposed siamese network treats the change detection problem as a binary semantic segmentation task and learns to extract features from the image pairs directly. The siamese architecture, together with the elaborately designed semantic segmentation networks, significantly improves performance on change detection tasks. Experimental results demonstrate the promising performance of the proposed network compared to existing approaches.
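A toy numpy sketch of the siamese idea: a 1x1 linear map stands in for the shared deep encoder, and a distance threshold stands in for the segmentation head (both stand-ins are assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(img, weights):
    """Toy stand-in for the shared siamese encoder: a 1x1 'convolution'
    mixing the input channels (a real encoder would be a deep CNN)."""
    return np.tensordot(img, weights, axes=([2], [0]))  # (H, W, F)

h, w, c, f = 16, 16, 3, 4
weights = rng.standard_normal((c, f))          # shared between both branches
img_t1 = rng.random((h, w, c))                 # pre-change image
img_t2 = rng.random((h, w, c))                 # post-change image

feat_t1 = extract_features(img_t1, weights)    # same weights in both
feat_t2 = extract_features(img_t2, weights)    # branches: siamese design

# Binary-segmentation-style head: threshold the per-pixel feature distance.
dist = np.linalg.norm(feat_t1 - feat_t2, axis=2)
change_map = (dist > dist.mean()).astype(np.uint8)
```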


2021 ◽  
Vol 11 (19) ◽  
pp. 8996
Author(s):  
Yuwei Cao ◽  
Marco Scaioni

In current research, fully supervised Deep Learning (DL) techniques are employed to train a segmentation network to be applied to point clouds of buildings. However, training such networks requires large amounts of fine-labeled buildings’ point-cloud data, presenting a major challenge in practice because they are difficult to obtain. Consequently, the application of fully supervised DL for semantic segmentation of buildings’ point clouds at LoD3 level is severely limited. In order to reduce the number of required annotated labels, we propose a novel label-efficient DL network that obtains per-point semantic labels of LoD3 buildings’ point clouds with limited supervision, named 3DLEB-Net. In general, it consists of two steps. The first step (Autoencoder, AE) is composed of a Dynamic Graph Convolutional Neural Network (DGCNN) encoder and a folding-based decoder. It is designed to extract discriminative global and local features from input point clouds by faithfully reconstructing them without any label. The second step is the semantic segmentation network. By supplying a small amount of task-specific supervision, a segmentation network is proposed for semantically segmenting the encoded features acquired from the pre-trained AE. Experimentally, we evaluated our approach on the Architectural Cultural Heritage (ArCH) dataset. Compared to fully supervised DL methods, we found that our model achieved state-of-the-art results on unseen scenes, using only 10% of the labeled training data required by the fully supervised methods. Moreover, we conducted a series of ablation studies to show the effectiveness of the design choices of our model.
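The two-step, label-efficient pipeline can be mimicked in plain numpy. PCA stands in for the DGCNN/folding autoencoder and a nearest-centroid rule for the segmentation network; only the 10%-labels protocol is taken from the abstract, everything else is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy point-cloud features: two classes (e.g. 'wall' vs 'window' points).
n, d = 200, 6
labels = rng.integers(0, 2, n)
points = rng.standard_normal((n, d)) + labels[:, None] * 2.0

# Step 1 (unsupervised 'autoencoder' stand-in): PCA via SVD learns a
# low-dimensional encoding by reconstruction, using no labels at all.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
encode = lambda x: (x - points.mean(axis=0)) @ vt[:2].T  # 2-D codes

# Step 2 (limited supervision): fit a nearest-centroid classifier on the
# codes of only 10% of the points, mimicking label-efficient training.
idx = rng.choice(n, n // 10, replace=False)
codes = encode(points)
centroids = np.stack([codes[idx][labels[idx] == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(np.linalg.norm(codes[:, None] - centroids, axis=2), axis=1)
accuracy = (pred == labels).mean()
```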


2015 ◽  
Vol 8 (2) ◽  
pp. 327-335 ◽  
Author(s):  
Daniel Hölbling ◽  
Barbara Friedl ◽  
Clemens Eisank

Abstract Earth observation (EO) data are very useful for the detection of landslides after triggering events, especially if they occur in remote and difficult-to-access terrain. To fully exploit the potential of the wide range of existing remote sensing data, innovative and reliable landslide (change) detection methods are needed. Recently, object-based image analysis (OBIA) has been employed for EO-based landslide (change) mapping. The proposed object-based approach has been tested on a sub-area of the Baichi catchment in northern Taiwan. The focus is on the mapping of landslides and debris flows/sediment transport areas caused by the Typhoons Aere in 2004 and Matsa in 2005. For both events, pre- and post-disaster optical satellite images (SPOT-5 with 2.5 m spatial resolution) were analysed. A Digital Elevation Model (DEM) with 5 m spatial resolution and its derived products, i.e., slope and curvature, were additionally integrated in the analysis to support the semi-automated object-based landslide mapping. Changes were identified by comparing the normalised values of the Normalized Difference Vegetation Index (NDVI) and the Green Normalized Difference Vegetation Index (GNDVI) of segmentation-derived image objects between pre- and post-event images and were attributed to landslide classes.
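The NDVI/GNDVI change criterion itself is straightforward; a minimal numpy sketch follows, where the reflectance values and the -0.3 threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-9)

def gndvi(nir, green):
    """Green NDVI: uses the green band instead of the red band."""
    return (nir - green) / (nir + green + 1e-9)

# Toy per-object mean reflectances for pre- and post-event images
# (real values would come from SPOT-5 segmentation-derived objects).
pre = {"nir": np.array([0.6, 0.5]), "red": np.array([0.1, 0.1]), "green": np.array([0.2, 0.15])}
post = {"nir": np.array([0.2, 0.5]), "red": np.array([0.3, 0.1]), "green": np.array([0.3, 0.15])}

# A drop in the vegetation indices between the epochs indicates vegetation
# loss, a typical signature of fresh landslides.
ndvi_change = ndvi(post["nir"], post["red"]) - ndvi(pre["nir"], pre["red"])
gndvi_change = gndvi(post["nir"], post["green"]) - gndvi(pre["nir"], pre["green"])
landslide_candidate = ndvi_change < -0.3  # hypothetical threshold
```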


2021 ◽  
Author(s):  
Federico Figari Tomenotti

Change detection is a well-known topic in remote sensing. The goal is to track and monitor the evolution of changes affecting the Earth's surface over time. The recent increase in the availability of remote sensing data for Earth observation and in computational power has raised interest in this field of research. In particular, the keywords “multitemporal” and “heterogeneous” play prominent roles. The former refers to the availability and comparison of two or more satellite images of the same place on the ground, in order to find changes and track the evolution of the observed surface, possibly with different time sensitivities. The latter refers to the capability of performing change detection with images coming from different sources, corresponding to different sensors, wavelengths, polarizations, acquisition geometries, etc. This thesis addresses the challenging topic of multitemporal change detection with heterogeneous remote sensing images. It proposes a novel approach, taking inspiration from recent developments in the literature. The proposed method is based on deep learning - involving convolutional autoencoders - and represents an example of unsupervised change detection. A major novelty of the work consists in including a prior information model, used to make the method unsupervised, within a well-established algorithm such as canonical correlation analysis, and in combining these with a deep learning framework to give rise to an image translation method able to compare heterogeneous images regardless of their highly different domains. The theoretical analysis is supported by experimental results comparing the proposed methodology to the state of the art of this discipline. Two different datasets were used for the experiments, and the results obtained on both of them show the effectiveness of the proposed method.
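Linear canonical correlation analysis, one of the building blocks named above, can be written compactly in numpy; the toy "heterogeneous" features below (two sensors observing one shared latent signal) are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Heterogeneous features of the same scene: a shared latent signal seen
# through two different 'sensors' (hypothetical stand-ins for SAR/optical).
n = 300
latent = rng.standard_normal(n)
X = np.c_[latent + 0.1 * rng.standard_normal(n), rng.standard_normal(n)]
Y = np.c_[rng.standard_normal(n), -latent + 0.1 * rng.standard_normal(n)]

def cca_first_pair(X, Y):
    """First pair of canonical directions via the SVD of the whitened
    cross-covariance (a compact numpy formulation of linear CCA)."""
    m = len(X)
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx, Syy = Xc.T @ Xc / m, Yc.T @ Yc / m
    Sxy = Xc.T @ Yc / m
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))  # whitener: Wx @ Sxx @ Wx.T = I
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    u, s, vt = np.linalg.svd(Wx @ Sxy @ Wy.T)
    return Wx.T @ u[:, 0], Wy.T @ vt[0], s[0]

a, b, corr = cca_first_pair(X, Y)
# Projecting both domains onto their canonical directions yields maximally
# correlated signals, i.e. a common space for comparing heterogeneous images.
```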


Author(s):  
A. R. D. Putri ◽  
P. Sidiropoulos ◽  
J.-P. Muller

Abstract. The surface of Mars has been imaged in visible wavelengths for more than 40 years, since the first flyby image taken by Mariner 4 in 1965. With higher-resolution orbital imagery from MOC-NA, HRSC, CTX, THEMIS, and HiRISE, changes can now be observed in high-resolution images from different instruments, including spiders (Piqueux et al., 2003) near the south pole and Recurring Slope Lineae (McEwen et al., 2011) observable at HiRISE resolution. Given the huge amount of data and the small number of datasets available on Martian changes, semi-automatic or automatic methods are preferred to help narrow down surface change candidates over a large area.

To detect changes automatically in Martian images, we propose a method based on a denoising autoencoder that maps the first Martian image to the second. Both images have been automatically coregistered and orthorectified using ACRO (Autocoregistration and Orthorectification) (Sidiropoulos and Muller, 2018) to the same base image, with HRSC (High Resolution Stereo Camera) (Neukum and Jaumann, 2004; Putri et al., 2018) and CTX (Context Camera) (Tao et al., 2018) orthorectified using their DTMs (Digital Terrain Models), to reduce the number of false positives caused by differences in instruments and viewing conditions. The difference of the image codes is then input to an anomaly detector to look for change candidates. We compare different anomaly detection methods in our change detection pipeline: OneClassSVM, Isolation Forest, and Gaussian Mixture Models, in known areas of change such as Nicholson Crater (dark slope streak), using image pairs from the same and different instruments.
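The anomaly-detection stage can be sketched with scikit-learn's implementations of the three detectors named above; the synthetic code differences and the contamination settings are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Toy stand-in for per-pixel code differences from the denoising
# autoencoder: mostly small residuals plus a few large 'changes'.
normal = 0.1 * rng.standard_normal((200, 2))
changed = rng.standard_normal((10, 2)) + 5.0
codes_diff = np.vstack([normal, changed])

# Three anomaly detectors compared in the pipeline; each flags samples
# whose code difference is unusual as change candidates (-1 = anomaly).
iso = IsolationForest(contamination=0.05, random_state=0).fit_predict(codes_diff)
svm = OneClassSVM(nu=0.05).fit_predict(codes_diff)

# GMM variant: fit the 'no change' distribution, flag low-likelihood samples.
gmm = GaussianMixture(n_components=1, random_state=0).fit(normal)
gmm_flags = gmm.score_samples(codes_diff) < gmm.score_samples(normal).min()
```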


Author(s):  
Bo Chen ◽  
Hua Zhang ◽  
Yonglong Li ◽  
Shuang Wang ◽  
Huaifang Zhou ◽  
...  

Abstract An increasing number of detection methods based on computer vision are applied to detect cracks in water conservancy infrastructure. However, most studies directly use existing feature extraction networks, designed for open-source datasets, to extract crack information. As the crack distribution and pixel features differ from those in such data, the extracted crack information is incomplete. In this paper, a deep-learning-based network for dam surface crack detection is proposed, which mainly addresses the semantic segmentation of cracks on the dam surface. In particular, we design a shallow encoding network to extract features of crack images based on a statistical analysis of cracks. Further, to enhance the relevance of contextual information, we introduce an attention module into the decoding network. During training, we use the sum of Cross-Entropy and Dice Loss as the loss function to overcome data imbalance. The quantitative information of cracks is extracted by the imaging principle after morphological algorithms are used to extract the morphological features of the predicted result. We built a manually annotated dataset containing 1577 images to verify the effectiveness of the proposed method, which achieves state-of-the-art performance on our dataset. Specifically, the precision, recall, IoU, F1_measure, and accuracy reach 90.81%, 81.54%, 75.23%, 85.93%, and 99.76%, respectively, and the quantization error of cracks is less than 4%.
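The Cross-Entropy + Dice Loss combination is easy to state concretely. A minimal numpy sketch for the binary case (the toy predictions are illustrative; the paper's exact weighting is not specified, so an unweighted sum is assumed):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy."""
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient; insensitive to the large background area,
    which is what makes it useful for thin structures such as cracks."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def combined_loss(pred, target):
    # Unweighted sum, matching "the sum of Cross-Entropy and Dice Loss".
    return bce(pred, target) + dice_loss(pred, target)

# Tiny example: a near-perfect and a poor crack prediction.
target = np.array([[0, 0, 0, 0], [1, 1, 1, 1]], dtype=float)
good = np.where(target == 1, 0.9, 0.1)
bad = np.where(target == 1, 0.1, 0.9)
```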


2021 ◽  
Vol 13 (14) ◽  
pp. 2646
Author(s):  
Quanfu Xu ◽  
Keming Chen ◽  
Guangyao Zhou ◽  
Xian Sun

Change detection based on deep learning has made great progress recently, but some challenges remain, such as the small size of open labeled datasets, differing viewpoints in image pairs, and poor similarity measures between feature pairs. To alleviate these problems, this paper presents a novel change capsule network, taking advantage of a capsule network's ability to better handle different viewpoints and to achieve satisfactory performance with small training data for optical remote sensing image change detection. First, two identical non-shared-weight capsule networks are designed to extract the vector-based features of image pairs. Second, an unchanged-region reconstruction module is adopted to keep the feature space of the unchanged region more consistent. Third, vector cosine and vector difference are utilized to compare the vector-based features efficiently, which enlarges the separability between changed and unchanged pixels. Finally, a binary change map is produced by analyzing both the vector cosine and the vector difference. Thanks to the unchanged-region reconstruction module and the vector cosine and vector difference module, the extracted feature pairs are more comparable and separable. Moreover, to test the effectiveness of the proposed network in dealing with different viewpoints in multi-temporal images, we collect a new change detection dataset of the Al Udeid Air Base (AUAB) using Google Earth. The results of experiments carried out on the AUAB dataset show that the change capsule network deals better with different viewpoints and improves the comparability and separability of feature pairs. Furthermore, a comparison of experimental results on the AUAB dataset and the SZTAKI AirChange Benchmark Set demonstrates the effectiveness and superiority of the proposed method.
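The vector cosine / vector difference comparison can be illustrated in numpy; the feature vectors and both thresholds below are hypothetical stand-ins for capsule outputs:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy capsule outputs: one vector-based feature per pixel and epoch.
h, w, d = 4, 4, 8
feat_t1 = rng.random((h, w, d))
feat_t2 = feat_t1.copy()
feat_t2[0, 0] = -feat_t1[0, 0]          # changed pixel: opposite direction
feat_t2[0, 1] = feat_t1[0, 1] * 3.0     # changed pixel: different length

# Vector cosine compares directions, vector difference compares magnitudes;
# together they separate changed from unchanged pixels.
cos = (feat_t1 * feat_t2).sum(-1) / (
    np.linalg.norm(feat_t1, axis=-1) * np.linalg.norm(feat_t2, axis=-1) + 1e-9)
diff = np.linalg.norm(feat_t1 - feat_t2, axis=-1)

change_map = (cos < 0.5) | (diff > 1.0)  # hypothetical thresholds
```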


Author(s):  
S. Havivi ◽  
I. Schvartzman ◽  
S. Maman ◽  
A. Marinoni ◽  
P. Gamba ◽  
...  

Satellite images are widely used in the risk cycle to understand exposure, refine hazard maps, and quickly provide an assessment after a natural or man-made disaster. Though there are different types of satellite images (e.g., optical, radar), these have not been combined for risk assessments. The characteristics of different remote sensing data types may be extremely valuable for monitoring and evaluating the impacts of disaster events, extracting additional information and thus making it available for emergency situations. To test this approach, two different change detection methods, for two different sensors' data, were used: Coherence Change Detection (CCD) for SAR data and Covariance Equalization (CE) for multispectral imagery. CCD identifies the stability of an area and shows where changes have occurred; it reveals subtle changes with an accuracy of several millimetres to centimetres. The CE method overcomes the differences in atmospheric effects between two multispectral images taken at different times, so that areas that have undergone a major change can be detected. To achieve our goals, we focused on the urban areas affected by the tsunami event in Sendai, Japan, on March 11, 2011, which affected the surrounding area, coastline, and inland. High-resolution TerraSAR-X (TSX) and Landsat 7 images covering the research area were acquired for the period before and after the event, and all were pre-processed and processed according to each sensor. The results of the optical and SAR algorithms were combined by resampling the spatial resolution of the multispectral data to the SAR resolution, applied by spatial linear interpolation. A score representing the damage level in both products was assigned. In the results of both algorithms, a high level of damage is shown in the areas closest to the sea and shoreline.
Our approach, combining SAR and multispectral images, leads to more reliable information and provides a complete scene for the emergency response following an event.
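The resampling-and-fusion step can be sketched in numpy. The bilinear resampler below and the equal weighting of the two scores are assumptions; the abstract specifies only "spatial linear interpolation" and a combined damage score:

```python
import numpy as np

def bilinear_resample(img, out_shape):
    """Spatial linear interpolation of a 2-D score map onto a finer grid
    (used here to bring the multispectral scores to the SAR resolution)."""
    h, w = img.shape
    oh, ow = out_shape
    ys = np.linspace(0, h - 1, oh)
    xs = np.linspace(0, w - 1, ow)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Coarse multispectral (CE) damage scores and fine SAR (CCD) scores.
ce_scores = np.array([[0.0, 1.0], [1.0, 0.0]])
ccd_scores = np.ones((4, 4)) * 0.5

ce_up = bilinear_resample(ce_scores, ccd_scores.shape)
combined_damage = 0.5 * ce_up + 0.5 * ccd_scores  # hypothetical equal weighting
```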


Author(s):  
E. Bousias Alexakis ◽  
C. Armenakis

Abstract. Over the past few years, many research works have utilized Convolutional Neural Networks (CNN) in the development of fully automated change detection pipelines from high resolution satellite imagery. Even though CNN architectures can achieve state-of-the-art results in a wide variety of vision tasks, including change detection applications, they require extensive amounts of labelled training examples in order to be able to generalize to new data through supervised learning. In this work we experiment with the implementation of a semi-supervised training approach in an attempt to improve the image semantic segmentation performance of models trained using a small number of labelled image pairs, by leveraging information from additional unlabelled image samples. The approach is based on the Mean Teacher method, a semi-supervised approach successfully applied for image classification and for semantic segmentation of medical images. Mean Teacher uses an exponential moving average of the model weights from previous epochs to check the consistency of the model's predictions under various perturbations. Our goal is to examine whether its application in a change detection setting can result in analogous performance improvements. The preliminary results of the proposed method appear to be comparable to the results of traditional fully supervised training. Research is continuing towards fine-tuning the method and reaching solid conclusions with respect to the potential benefits of semi-supervised learning approaches in image change detection applications.
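The Mean Teacher mechanics described above reduce to two small formulas, sketched here with toy weights; the EMA rate and the mean-squared-error consistency loss are common choices for this method, assumed rather than taken from this paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def ema_update(teacher, student, alpha=0.99):
    """Mean Teacher: the teacher's weights are an exponential moving
    average of the student's weights across training steps."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

def consistency_loss(student_pred, teacher_pred):
    """Penalises disagreement between student and teacher predictions
    on (possibly unlabelled, perturbed) inputs."""
    return np.mean((student_pred - teacher_pred) ** 2)

student = [rng.standard_normal((3, 3))]
teacher = [w.copy() for w in student]

for _ in range(100):                      # simulated training steps
    student = [w - 0.01 * rng.standard_normal(w.shape) for w in student]
    teacher = ema_update(teacher, student)

# The teacher trails the student smoothly rather than copying it.
gap = np.linalg.norm(teacher[0] - student[0])
```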


2021 ◽  
Vol 13 (23) ◽  
pp. 4918
Author(s):  
Te Han ◽  
Yuqi Tang ◽  
Xin Yang ◽  
Zefeng Lin ◽  
Bin Zou ◽  
...  

To address the susceptibility of state-of-the-art heterogeneous-image change detection methods to image noise, the subjectivity of training sample selection, and their inefficiency, this study proposes a post-classification change detection method for heterogeneous images with improved training of a hierarchical extreme learning machine (HELM). After smoothing the images to suppress noise, a sample selection method is defined to train the HELM for each image, in which feature extraction is implemented separately for the heterogeneous images and the parameters need not be fine-tuned. The multi-temporal feature maps extracted from the trained HELM are then segmented to obtain classification maps, which are compared to generate a change map with change types. The proposed method is validated experimentally using one set of synthetic aperture radar (SAR) images obtained from Sentinel-1, one set of optical images acquired from Google Earth, and two sets of heterogeneous SAR and optical images. The results show that, compared to state-of-the-art change detection methods, the proposed method improves the accuracy of change detection by more than 8% in terms of the kappa coefficient and greatly reduces run time regardless of the type of images used. Such enhancement reflects the robustness and superiority of the proposed method.
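The kappa coefficient in which the improvement above is reported can be computed directly from a change/no-change confusion matrix; the matrix below is hypothetical:

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa: agreement beyond chance, from a confusion matrix
    with rows = reference classes and columns = predicted classes."""
    n = confusion.sum()
    po = np.trace(confusion) / n                               # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical result: 90 true no-change, 95 true change,
# 10 false alarms, 5 misses.
cm = np.array([[90, 10],
               [5, 95]])
k = kappa(cm)
```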

