EARTHQUAKE-INDUCED BUILDING DETECTION BASED ON OBJECT-LEVEL TEXTURE FEATURE CHANGE DETECTION OF MULTI-TEMPORAL SAR IMAGES

2018 ◽  
Vol 24 (4) ◽  
pp. 442-458 ◽  
Author(s):  
Qiang Li ◽  
Lixia Gong ◽  
Jingfa Zhang

Abstract Building damage is the major cause of casualties from earthquakes. Traditional pixel-based methods for detecting earthquake-damaged buildings are susceptible to speckle noise. In this paper, an object-based change detection method is presented for detecting earthquake damage with synthetic aperture radar (SAR) data. The method is built on object-level texture features of the SAR data. First, principal component analysis is used to transform the optimal texture features into a feature space suited to extracting the key changes. The feature space is then clustered with the watershed segmentation algorithm, which introduces the object-oriented view and computes the difference map at the object level. With training samples, classification thresholds for different grades of earthquake damage are learned, and damaged buildings are detected. The proposed method efficiently visualizes earthquake damage using Advanced Land Observing Satellite-1 (ALOS-1) images. Its performance is evaluated over the town of Jiegu, which was severely hit by the Yushu earthquake. Cross-validation shows that the overall accuracy is significantly higher than that of TDCD and IDCD.
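
The pipeline described above (texture features, PCA, watershed objects, learned damage thresholds) can be sketched roughly in Python with scikit-image and scikit-learn. The GLCM descriptors, window size, and damage_threshold value are illustrative assumptions rather than the authors' settings, and the inputs are assumed to be co-registered 8-bit SAR amplitude images.

import numpy as np
from sklearn.decomposition import PCA
from skimage.feature import graycomatrix, graycoprops
from skimage.segmentation import watershed
from skimage.filters import sobel

def glcm_features(img, win=9):
    # Per-pixel GLCM texture descriptors (contrast, homogeneity, energy) on sliding windows.
    pad = win // 2
    padded = np.pad(img.astype(np.uint8), pad, mode='reflect')
    feats = np.zeros(img.shape + (3,), dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            glcm = graycomatrix(patch, [1], [0], levels=256, symmetric=True, normed=True)
            feats[i, j] = [graycoprops(glcm, p)[0, 0] for p in ('contrast', 'homogeneity', 'energy')]
    return feats

def object_level_change(pre, post, win=9, damage_threshold=0.5):
    f_pre, f_post = glcm_features(pre, win), glcm_features(post, win)
    # 1. Stack both dates' texture features and keep the first principal component.
    stack = np.concatenate([f_pre, f_post], axis=-1)
    h, w, d = stack.shape
    pc1 = PCA(n_components=1).fit_transform(stack.reshape(-1, d)).reshape(h, w)
    # 2. Watershed segmentation of the principal-component image into objects.
    labels = watershed(sobel(pc1))
    # 3. Object-level difference map: mean absolute texture change per segment,
    #    thresholded with a value the paper learns from training samples.
    diff = np.abs(f_post - f_pre).mean(axis=-1)
    changed = np.zeros(diff.shape, dtype=bool)
    for lab in np.unique(labels):
        mask = labels == lab
        changed[mask] = diff[mask].mean() > damage_threshold
    return changed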

2020 ◽  
Vol 12 (18) ◽  
pp. 3057
Author(s):  
Nian Shi ◽  
Keming Chen ◽  
Guangyao Zhou ◽  
Xian Sun

With the development of remote sensing technologies, change detection in heterogeneous images has become increasingly necessary and significant. The main difficulty lies in making the input heterogeneous images comparable so that changes can be detected. In this paper, we propose an end-to-end heterogeneous change detection method based on a feature space constraint. First, considering that the input heterogeneous images lie in two distinct feature spaces, two encoders with the same structure are used to extract their features, and a decoder produces the change map from the extracted features. Then, the Gram matrices, which capture the correlations between features, are calculated to represent the two feature spaces. The squared Euclidean distance between the Gram matrices, termed the feature space loss, is used to constrain the extracted features. A combined loss function consisting of the binary cross entropy loss and the feature space loss is then designed for training the model. Finally, change detection results between heterogeneous images are obtained once the model is trained. The proposed method constrains the features of the two heterogeneous images to the same feature space while keeping their unique characteristics, so the comparability between features is enhanced and better detection results can be achieved. Experiments on two heterogeneous image datasets consisting of optical and SAR images demonstrate the effectiveness and superiority of the proposed method.
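
The feature-space constraint is easy to state in code. The PyTorch sketch below assumes the two encoders output feature maps feat_opt and feat_sar of the same shape and that the weighting factor lam is a tunable hyperparameter; it illustrates the idea rather than reproducing the authors' implementation.

import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) channel-correlation (Gram) matrix
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def combined_loss(change_logits, change_gt, feat_opt, feat_sar, lam=0.1):
    # Binary cross entropy on the predicted change map ...
    bce = F.binary_cross_entropy_with_logits(change_logits, change_gt)
    # ... plus the squared Euclidean distance between the Gram matrices (feature space loss).
    fs_loss = ((gram_matrix(feat_opt) - gram_matrix(feat_sar)) ** 2).sum(dim=(1, 2)).mean()
    return bce + lam * fs_loss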


2019 ◽  
Vol 11 (23) ◽  
pp. 2740 ◽  
Author(s):  
Bin Luo ◽  
Chudi Hu ◽  
Xin Su ◽  
Yajun Wang

Temporal analysis of synthetic aperture radar (SAR) time series is a basic and significant issue in the remote sensing field. Change detection, like other SAR image interpretation tasks, typically involves non-linear/non-convex problems. Complex (non-linear) change criteria or models have therefore been proposed for SAR images, instead of the direct differencing (e.g., change vector analysis) with or without a linear transform (e.g., Principal Component Analysis, Slow Feature Analysis) used in optical image change detection. In this paper, inspired by powerful deep learning techniques, we present a deep autoencoder (AE) based non-linear subspace representation for unsupervised change detection with multi-temporal SAR images. The proposed architecture is built upon an autoencoder-like (AE-like) network, which non-linearly maps the input SAR data into a latent space. Unlike normal AE networks, a self-expressive layer performing like principal component analysis (PCA) is added between the encoder and the decoder, which further transforms the mapped SAR data into mutually orthogonal subspaces. To make the architecture more effective for change detection, the parameters are trained to minimize the representation difference of unchanged pixels in the deep subspace. The proposed architecture is therefore named the Differentially Deep Subspace Representation (DDSR) network for multi-temporal SAR image change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed architecture.
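
As a rough PyTorch illustration of the central idea, the sketch below places a trainable self-expressive (linear, PCA-like) layer between a small encoder and decoder. The layer sizes, the coefficient matrix over a fixed batch of pixels, and the loss terms listed in the comments are assumptions for exposition, not the DDSR network itself.

import torch
import torch.nn as nn

class AESelfExpressive(nn.Module):
    def __init__(self, in_dim=2, latent_dim=16, n_pixels=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        # Self-expressive layer: every latent code is re-expressed as a linear
        # combination of the other codes, exposing the subspace structure.
        self.coef = nn.Parameter(1e-4 * torch.randn(n_pixels, n_pixels))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        # x: (n_pixels, in_dim) pixel values paired across the two acquisition dates
        z = self.encoder(x)
        z_se = self.coef @ z          # subspace (self-expressive) representation
        return self.decoder(z_se), z, z_se

# Training would combine a reconstruction loss, a self-expression loss on z - z_se,
# a regularizer on self.coef, and a term minimizing the representation difference
# of pixels assumed unchanged, in the spirit of the paper.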


2020 ◽  
Vol 12 (9) ◽  
pp. 1441 ◽  
Author(s):  
Lijun Huang ◽  
Ru An ◽  
Shengyin Zhao ◽  
Tong Jiang ◽  
Hao Hu

Change detection in very high-resolution remote sensing imagery has long been an important research issue because of registration errors, method robustness, and monitoring accuracy. This paper proposes a robust and more accurate change detection (CD) approach, first applied to a small experimental area and then extended to a wider one. A feature space is constructed that includes object features, Visual Geometry Group (VGG) deep features, and texture features. The difference image is obtained by considering the contextual information within a circular neighborhood of scalable radius. This overcomes the registration error caused by rotation and shift of the instantaneous field of view and also improves the reliability and robustness of the CD. To enhance the robustness of the U-Net model, the training dataset is enlarged manually via operations such as blurring the image, adding noise, and rotating the image. The trained model is then used to predict over the experimental areas, achieving 92.3% accuracy. Compared with a Support Vector Machine (SVM) and a Siamese network, the check error rate drops to 7.86% and the Kappa coefficient rises to 0.8254, showing that the proposed method outperforms both.
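
The augmentation step mentioned above (blurring, additive noise, rotation) can be sketched as below; the kernel width, noise level, and rotation angles are illustrative choices, not values reported in the paper.

import numpy as np
from scipy import ndimage

def augment(image, label, rng=None):
    # image: float array in [0, 1]; label: integer change mask of the same spatial size
    rng = np.random.default_rng(0) if rng is None else rng
    samples = [(image, label)]
    samples.append((ndimage.gaussian_filter(image, sigma=1.5), label))                   # blurring
    samples.append((np.clip(image + rng.normal(0.0, 0.02, image.shape), 0, 1), label))   # noise
    for angle in (90, 180, 270):                                                          # rotation
        samples.append((ndimage.rotate(image, angle, reshape=False, order=1),
                        ndimage.rotate(label, angle, reshape=False, order=0)))
    return samples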


2021 ◽  
Vol 13 (5) ◽  
pp. 833
Author(s):  
Lucas P. Ramos ◽  
Alexandre B. Campos ◽  
Christofer Schwartz ◽  
Leonardo T. Duarte ◽  
Dimas I. Alves ◽  
...  

Recently, it was demonstrated that low-frequency wavelength-resolution synthetic aperture radar (SAR) images can be considered to follow an additive mixing model due to their backscatter characteristics. This simplification allows the use of source separation methods, such as robust principal component analysis (RPCA) via principal component pursuit (PCP), for detecting changes in those images. In this manuscript, a change detection method for wavelength-resolution SAR image stacks based on RPCA is proposed. The method explores both the temporal and flight-heading diversity of a set of wavelength-resolution multitemporal SAR images in order to detect concealed targets in forested areas. A heuristic based on three rules for better exploiting the RPCA results is introduced, and a new configurable parameter for false-alarm reduction based on the analysis of image windows is proposed. The method is evaluated using real data obtained from measurements of the ultrawideband (UWB) very high-frequency (VHF) SAR system CARABAS-II. Experiments with stacks of four and seven reference images are conducted, and the use of reference images acquired with different flight headings is explored. The results indicate that a gain in performance can be achieved by using large image stacks containing at least one image of each possible flight heading in the data set, which can yield a probability of detection (PD) above 99% for a false alarm rate (FAR) as low as one false alarm per three square kilometers. Furthermore, it is demonstrated that high PD and low FAR can also be achieved when images from similar flight headings are used as reference images.
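
For reference, RPCA via principal component pursuit can be written compactly with the standard inexact ALM iteration (alternating soft and singular-value thresholding). The stack construction and parameter defaults below are textbook choices, not the settings used with the CARABAS-II data.

import numpy as np

def soft(X, tau):
    # entrywise soft-thresholding (shrinkage) operator
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    # Decompose D into a low-rank background L and a sparse change component S.
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    mu = 0.25 * m * n / np.abs(D).sum() if mu is None else mu
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt        # singular-value thresholding -> low rank
        S = soft(D - L + Y / mu, lam / mu)        # soft thresholding -> sparse changes
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S

# Usage: each column of D holds one vectorized image of the multitemporal stack,
# e.g. D = np.stack([img.ravel() for img in images], axis=1); L, S = rpca_pcp(D)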


2020 ◽  
Author(s):  
Karolina Nurzynska ◽  
Sebastian Iwaszenko

The segmentation of rock grains in images depicting bulk rock materials is considered. The rock material images are transformed with selected texture operators to obtain a set of descriptive features. First-order features, second-order features, the run-length matrix, the grey-tone difference matrix, and Laws' energies are used for this purpose. The features are classified using k-nearest neighbours, support vector machine, and artificial neural network classifiers. The results show that the borders of rock grains can be determined with above 70% accuracy. A multi-texture approach was also investigated, increasing accuracy to over 77% for early fusion of features. Attempts to reduce feature-space dimensionality, both by manually selecting features and by principal component analysis, caused a significant decrease in accuracy. The results were visually compared with the ground truth, and the observed agreement can be considered satisfactory.
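
As an illustration of the early-fusion scheme, first-order statistics and GLCM (second-order) descriptors of an image window can be concatenated into one feature vector and fed to a k-nearest-neighbours classifier; the window statistics, GLCM settings, and k below are assumptions, not the paper's configuration.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def window_features(patch):
    # patch: small 8-bit grayscale window cut from a bulk rock material image
    patch = patch.astype(np.uint8)
    first_order = [patch.mean(), patch.std(), patch.min(), patch.max()]
    glcm = graycomatrix(patch, [1], [0, np.pi / 2], levels=256, symmetric=True, normed=True)
    second_order = [graycoprops(glcm, p).mean()
                    for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
    return np.array(first_order + second_order)    # early fusion: one concatenated vector

# Training: X = np.array([window_features(p) for p in patches]); y holds grain/border labels
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y); predictions = clf.predict(X_test)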


2012 ◽  
Vol E95.B (5) ◽  
pp. 1890-1893
Author(s):  
Wang LUO ◽  
Hongliang LI ◽  
Guanghui LIU ◽  
Guan GUI
