Spatial–Spectral Feature Fusion Coupled with Multi-Scale Segmentation Voting Decision for Detecting Land Cover Change with VHR Remote Sensing Images

2019 ◽  
Vol 11 (16) ◽  
pp. 1903 ◽  
Author(s):  
Zheng ◽  
Cao ◽  
Lv ◽  
Benediktsson

In this article, a novel approach for land cover change detection (LCCD) using very high resolution (VHR) remote sensing images, based on spatial–spectral feature fusion and multi-scale segmentation voting decision, is proposed. Unlike traditional methods that use a single feature without post-processing of the raw detection map, the proposed approach uses spatial–spectral features and post-processing strategies to improve detection accuracy and performance. The proposed approach involves two stages. First, the spatial features of the VHR remote sensing image are explored to complement the insufficiency of the spectral features, and the spatial–spectral features are fused with different strategies. The Manhattan distance between the corresponding spatial–spectral feature vectors of the bi-temporal images is then employed to measure the change magnitude between the bi-temporal images and generate a change magnitude image (CMI). Second, the Otsu binary threshold algorithm is applied to divide the CMI into a binary change detection map (BCDM), and a multi-scale segmentation voting decision algorithm is proposed to fuse the initial BCDMs into the final change detection map. Experiments were carried out on three pairs of bi-temporal VHR remote sensing images. The results were compared with those of state-of-the-art methods, including four popular contextual-based LCCD methods and three post-processing LCCD methods. The experimental comparisons demonstrate that the proposed approach has an advantage over the other state-of-the-art techniques in terms of detection accuracy and performance.
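As a minimal sketch of the change-magnitude and thresholding steps described above, the following Python snippet computes a Manhattan-distance CMI from two hypothetical, pre-computed spatial–spectral feature stacks and binarizes it with Otsu's threshold; the feature extraction and multi-scale voting stages are not shown, and the input arrays are stand-in data.

```python
import numpy as np
from skimage.filters import threshold_otsu


def change_magnitude_image(features_t1: np.ndarray, features_t2: np.ndarray) -> np.ndarray:
    """Manhattan (L1) distance between corresponding feature vectors, per pixel."""
    return np.abs(features_t1 - features_t2).sum(axis=-1)


def binary_change_map(cmi: np.ndarray) -> np.ndarray:
    """Binarize the change magnitude image with Otsu's threshold."""
    return cmi > threshold_otsu(cmi)


# Stand-in (H, W, C) spatial-spectral feature stacks for the two dates.
rng = np.random.default_rng(0)
f1 = rng.random((64, 64, 8))
f2 = f1 + rng.normal(0.0, 0.05, f1.shape)  # mostly unchanged scene
f2[20:40, 20:40] += 0.8                    # a simulated changed patch

bcdm = binary_change_map(change_magnitude_image(f1, f2))
print(f"Changed pixels: {bcdm.sum()} / {bcdm.size}")
```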

2021 ◽  
Vol 13 (22) ◽  
pp. 4528
Author(s):  
Xin Yang ◽  
Lei Hu ◽  
Yongmei Zhang ◽  
Yunqing Li

Remote sensing image change detection (CD) is an important task in remote sensing image analysis and is essential for an accurate understanding of changes on the Earth's surface. Deep learning (DL) is becoming increasingly popular for solving CD tasks on remote sensing images. Most existing DL-based CD methods use ordinary convolutional blocks to extract and compare remote sensing image features, which cannot fully capture the rich features of high-resolution (HR) remote sensing images. In addition, most existing methods lack robustness to pseudo-change information. To overcome these problems, in this article we propose a new method, MRA-SNet, for CD in remote sensing images. Taking the UNet network as the basic architecture, the method uses a Siamese network to extract the features of bi-temporal images separately in the encoder and performs a difference connection to better generate difference maps. Meanwhile, we replace the ordinary convolution blocks with Multi-Res blocks to extract spatial and spectral features at different scales, and use residual connections to extract additional detailed features. To better highlight the features of changed regions and suppress those of irrelevant regions, we introduce an Attention Gates module before the skip connection between the encoder and the decoder. Experimental results on a public remote sensing image CD dataset show that our proposed method outperforms other state-of-the-art (SOTA) CD methods in terms of evaluation metrics and performance.
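As an illustration of the attention-gating idea described above, the following PyTorch sketch implements a generic attention gate placed before a skip connection; the layer sizes and the exact gating design are assumptions for illustration, not the authors' published MRA-SNet configuration.

```python
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Gate a skip-connection feature map with a coarser decoder signal."""

    def __init__(self, in_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser gating signal to the skip resolution.
        gate = nn.functional.interpolate(
            gate, size=skip.shape[2:], mode="bilinear", align_corners=False
        )
        # Additive attention: highlight change regions, suppress the rest.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn


skip = torch.randn(1, 64, 128, 128)  # encoder (difference) feature map
gate = torch.randn(1, 128, 64, 64)   # coarser decoder feature map
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # -> (1, 64, 128, 128)
```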


2020 ◽  
Vol 9 (7) ◽  
pp. 462
Author(s):  
Josephina Paul ◽  
B. Uma Shankar ◽  
Balaram Bhattacharyya

Change detection (CD) using remote sensing images has been a challenging problem over the years, and it is even more difficult in the unsupervised domain. A novel automatic change detection technique in the unsupervised framework is proposed to address the real challenges involved in remote sensing change detection. As the accuracy of the change map is highly dependent on the quality of the difference image (DI), a set of normalized difference images and a complementary set of normalized ratio images are fused in the Nonsubsampled Contourlet Transform (NSCT) domain to generate high-quality difference images. The NSCT is chosen because it efficiently suppresses noise through its unique characteristics, such as multidirectionality and shift-invariance, that are well suited to change detection. The low-frequency sub-bands are fused by averaging to combine the complementary information in the two DIs, and the high-frequency sub-bands are merged by a minimum-energy rule to preserve the edges and salient features in the image. By employing a novel Particle Swarm Optimization algorithm with Leader Intelligence (LIPSO), change maps are generated from the fused sub-bands in two different ways: (i) from a single spectral band, and (ii) from a combination of spectral bands. In LIPSO, the concept of leader and followers is modified with intelligent particles that perform Lévy flights randomly for better exploration, so as to achieve global optima. The proposed method achieved overall accuracies of 99.64%, 98.49%, and 97.66% on the three datasets considered. The results have been compared with relevant algorithms; the quantitative metrics demonstrate the superiority of the proposed technique over the other methods and are found to be statistically significant by McNemar's test. The visual quality of the results also corroborates the superiority of the proposed method.
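The two sub-band fusion rules can be sketched as follows, assuming the NSCT decompositions of the two difference images are already available (the NSCT itself is not part of standard Python libraries); the sub-band arrays, the window size, and the literal reading of the minimum-energy rule are stand-in assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def fuse_low(low_a: np.ndarray, low_b: np.ndarray) -> np.ndarray:
    """Low-frequency sub-bands: average to combine complementary information."""
    return 0.5 * (low_a + low_b)


def fuse_high(high_a: np.ndarray, high_b: np.ndarray, window: int = 3) -> np.ndarray:
    """High-frequency sub-bands: per-pixel selection by local energy
    (assumed reading of the minimum-energy rule stated in the abstract)."""
    energy_a = uniform_filter(high_a**2, size=window)
    energy_b = uniform_filter(high_b**2, size=window)
    return np.where(energy_a <= energy_b, high_a, high_b)


# Stand-in sub-bands of the two difference images.
rng = np.random.default_rng(0)
low = fuse_low(rng.random((64, 64)), rng.random((64, 64)))
high = fuse_high(rng.normal(size=(64, 64)), rng.normal(size=(64, 64)))
print(low.shape, high.shape)
```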


Author(s):  
W. Yuan ◽  
X. Yuan ◽  
Z. Fan ◽  
Z. Guo ◽  
X. Shi ◽  
...  

Abstract. Building Change Detection (BCD) via multi-temporal remote sensing images is essential for applications such as urban monitoring, urban planning, and disaster assessment. However, most building change detection approaches extract only image-derived features from the various kinds of remote sensing images for change index determination, and thus cannot capture the subtle changes of small buildings. Given co-registered multi-temporal remote sensing images, illumination variations and misregistration errors always lead to inaccurate change detection results. This study investigates the applicability of multi-feature fusion, combining 2D features extracted directly from remote sensing images with 3D features extracted from the dense image matching (DIM) generated 3D point cloud, for accurate building change index generation. This paper introduces a graph neural network (GNN) based end-to-end learning framework for building change detection. The proposed framework comprises feature extraction, feature fusion, and change index prediction. It starts with a pre-trained VGG-16 network as a backbone and uses a five-layer U-net architecture for feature map extraction. The extracted 2D and 3D features are fed into the GNN-based feature fusion part. In the GNN part, we introduce a flexible attention-based context aggregation mechanism to address the illumination variations and misregistration errors, enabling the framework to reason jointly about the image-based texture information and the depth information introduced by the DIM-generated 3D point cloud. The GNN-generated affinity matrix is then used for change index determination through the Hungarian algorithm. An experiment conducted on a dataset covering the Setagaya-Ku, Tokyo area shows that the change map generated by the proposed method achieved a precision of 0.762 and an F1-score of 0.68 at the pixel level. Compared to traditional image-based change detection methods, our approach learns a prior over geometrical structure information from the real 3D world, which makes it robust to misregistration errors. Compared to CNN-based methods, the proposed method learns to fuse 2D and 3D features to represent more comprehensive information for building change index determination. The experimental comparisons demonstrate that the proposed approach outperforms both the traditional methods and the CNN-based methods.
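As a hedged illustration of the final assignment step, the snippet below applies the Hungarian algorithm (SciPy's linear_sum_assignment) to a stand-in affinity matrix between bi-temporal building candidates; the matrix values and the change-decision threshold are invented for demonstration and do not come from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
# Hypothetical GNN-predicted affinities between t1 and t2 building candidates.
affinity = rng.random((5, 5))

# The Hungarian algorithm minimizes cost, so negate to maximize total affinity.
rows, cols = linear_sum_assignment(-affinity)

threshold = 0.5  # assumed cut-off: a weak best match suggests a change
for i, j in zip(rows, cols):
    status = "matched (unchanged)" if affinity[i, j] >= threshold else "weak match (changed)"
    print(f"t1 candidate {i} <-> t2 candidate {j}: affinity={affinity[i, j]:.2f}, {status}")
```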


2019 ◽  
Vol 40 (13) ◽  
pp. 4910-4933 ◽  
Author(s):  
Ming Hao ◽  
Wenzhong Shi ◽  
Yuanxin Ye ◽  
Hua Zhang ◽  
Kazhong Deng

2020 ◽  
Vol 12 (5) ◽  
pp. 872 ◽  
Author(s):  
Ronghua Shang ◽  
Jiyu Zhang ◽  
Licheng Jiao ◽  
Yangyang Li ◽  
Naresh Marturi ◽  
...  

Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of complicated backgrounds, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on a simple fusion of the extracted multi-scale features often fail to provide satisfactory results when target sizes differ widely. To handle this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation of remote sensing images. It is an encoding and decoding structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates, together with global average pooling, to extract context information at multiple scales in parallel. MANet embeds a channel attention mechanism to fuse semantic features: the high- and low-level semantic information are concatenated to generate global features via global average pooling, and these global features are passed through a fully connected layer to acquire adaptive weight information for each channel. To accomplish an efficient fusion, these tuned weights are applied to the fused features. The performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments performed on the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other existing networks, with overall accuracy reaching 89.4% and 88.2%, respectively, and average F1 scores reaching 90.4% and 86.7%, respectively.
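A minimal PyTorch sketch of channel-attention fusion in the spirit of the AFM described above follows: concatenated high- and low-level features are pooled globally, a fully connected block produces per-channel weights, and the weights rescale the fused features. Channel counts and the reduction ratio are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Fuse two feature maps with channel weights from global average pooling."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([high, low], dim=1)      # concatenate along channels
        weights = self.fc(fused.mean(dim=(2, 3)))  # global average pooling + FC
        return fused * weights[:, :, None, None]   # channel-wise reweighting


high = torch.randn(1, 32, 64, 64)  # high-level semantic features (upsampled)
low = torch.randn(1, 32, 64, 64)   # low-level detail features
print(AdaptiveFusion(64)(high, low).shape)  # -> (1, 64, 64, 64)
```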


2021 ◽  
Vol 58 (2) ◽  
pp. 0228001
Author(s):  
Ma Tianhao ◽ 
Tan Hai ◽ 
Li Tianqi ◽ 
Wu Yanan ◽ 
Liu Qi

2019 ◽  
Vol 56 (12) ◽  
pp. 121003
Author(s):  
Qiuhan Jin ◽ 
Yangping Wang ◽ 
Jingyu Yang
