Change-detection accuracy assessment using SPOT multispectral imagery of the rural-urban fringe

1989
Vol 30 (1)
pp. 55-66
Author(s):
L. Martin
P. Howarth


2018
Vol 7 (11)
pp. 441
Author(s):
Zhenjin Zhou
Lei Ma
Tengyu Fu
Ge Zhang
Mengru Yao
...

Although increases in the spatial resolution of satellite imagery have prompted interest in object-based image analysis, few studies have used object-based methods for monitoring changes in coral reefs. This study proposes a high-accuracy object-based change detection (OBCD) method intended for coral reef environments, using QuickBird and WorldView-2 images. The proposed methodological framework includes image fusion, multi-temporal image segmentation, image differencing, random forests models, and object-area-based accuracy assessment. For validation, we applied the method to images of four coral reef study sites in the South China Sea. We compared the proposed OBCD method with a conventional pixel-based change detection (PBCD) method by implementing both methods under the same conditions. The average overall accuracy of OBCD exceeded 90%, approximately 20% higher than that of PBCD. The OBCD method was free from salt-and-pepper effects and was less sensitive to image misregistration in terms of change detection accuracy and mapping results. The object-area-based accuracy assessment reached a higher overall accuracy and per-class accuracy than the object-number-based and pixel-number-based accuracy assessments.
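
The workflow described above (multi-temporal segmentation, image differencing, a random forests classifier, and an object-area-based accuracy assessment) can be sketched roughly as follows. This is a minimal illustration on synthetic data, assuming standard SLIC segmentation and simple per-object features rather than the authors' exact fusion and segmentation choices.

```python
# Minimal OBCD sketch on synthetic data; segmentation and features are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic bitemporal multispectral images (rows, cols, bands).
t1 = rng.random((128, 128, 4))
t2 = t1 + rng.normal(0, 0.02, t1.shape)
t2[32:64, 32:64] += 0.5                          # simulate one changed area

# Multi-temporal segmentation on the stacked images, then image differencing.
segments = slic(np.concatenate([t1, t2], axis=2), n_segments=200,
                compactness=0.1, channel_axis=2)
diff = t2 - t1

# Per-object features (mean and std of band differences) and object areas.
labels = np.unique(segments)
feats = np.array([[diff[segments == l].mean(), diff[segments == l].std()]
                  for l in labels])
areas = np.array([(segments == l).sum() for l in labels])

# Reference labels, derived here from the simulated change mask.
change_mask = np.zeros((128, 128), bool)
change_mask[32:64, 32:64] = True
truth = np.array([change_mask[segments == l].mean() > 0.5 for l in labels])

# Random forests model: train on half of the objects, validate on the rest.
half = len(labels) // 2
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(feats[:half], truth[:half])
pred = rf.predict(feats[half:])

# Object-area-based accuracy: each validation object is weighted by its area, not counted as 1.
correct = pred == truth[half:]
area_weighted_oa = areas[half:][correct].sum() / areas[half:].sum()
print(f"object-area-based overall accuracy: {area_weighted_oa:.3f}")
```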


2006
Vol 27 (4)
pp. 218-228
Author(s):
Paul Rodway
Karen Gillies
Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and were then presented with changed or unchanged versions of those pictures and asked to detect whether each picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, facilitated change detection accuracy. This suggests that retrieving the picture's representation immunizes it against overwriting by the arrival of the changed picture. High- and low-vividness participants did not differ in overall levels of change detection accuracy. However, in a replication of Gur and Hilgard (1975), high-vividness participants were significantly more accurate than low-vividness participants at detecting salient changes to pictures. The results suggest that vivid images are not characterised by a high level of detail and that vivid imagery enhances memory for the salient aspects of a scene but not for all of its details. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.


Author(s):
Antonio Prieto
Vanesa Peinado
Julia Mayas

Abstract. Visual working memory has been defined as a system of limited capacity that enables the maintenance and manipulation of visual information. However, some perceptual features, such as Gestalt grouping, could improve visual working memory effectiveness. In two experiments, we explored how the presence of elements grouped by color similarity affects change detection performance for both grouped and non-grouped items. We combined a change detection task with a retrocue paradigm in which a six-item array had to be remembered. An always-valid, variable-delay retrocue appeared in some trials during the retention interval, either after 100 ms (iconic-trace period) or 1400 ms (working memory period), signaling the location of the probe. The results indicated that similarity grouping biased the information entered into visual working memory, improving change detection accuracy only for previously grouped probes but hindering change detection for non-grouped probes in certain conditions (Exp. 1). However, this bottom-up automatic encoding bias was overridden when participants were explicitly instructed to ignore the grouped items as irrelevant to the task (Exp. 2).


Author(s):
S. Su
T. Nawata
T. Fuse

Abstract. Automatic building change detection has become a topical issue owing to its wide range of applications, such as updating building maps. However, accurate building change detection remains challenging, particularly in urban areas. Thus far, there has been limited research on using the outdated building map (the building map before the update, referred to herein as the old-map) to increase the accuracy of building change detection. This paper presents a novel deep-learning-based method for building change detection using bitemporal aerial images containing RGB bands, bitemporal digital surface models (DSMs), and an old-map. The aerial images have spatial resolutions of 12.5 cm or 16 cm, and the cell size of the DSMs is 50 cm × 50 cm. The bitemporal aerial images, the height variations calculated as the differences between the bitemporal DSMs, and the old-map were fed into a network architecture to build an automatic building change detection model. The performance of the model was quantitatively and qualitatively evaluated for an urban area covering approximately 10 km² and containing over 21,000 buildings. The results indicate that it detects building changes more accurately than methods that use i) bitemporal aerial images only, ii) bitemporal aerial images and bitemporal DSMs, or iii) bitemporal aerial images and an old-map. The proposed method achieved recall rates of 89.3%, 88.8%, and 99.5% for new, demolished, and other buildings, respectively. The results also demonstrate that the old-map is an effective data source for increasing building change detection accuracy.
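
A rough sketch of the input fusion described above: the bitemporal RGB bands, the DSM height difference, and the old-map stacked as channels of a per-pixel classifier that distinguishes new, demolished, and other buildings. The framework (PyTorch) and the network depth are assumptions for illustration; the paper's actual architecture is not reproduced here.

```python
# Illustrative input-fusion network; layer sizes and framework are assumptions, not the paper's model.
import torch
import torch.nn as nn

class BuildingChangeNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        # 3 (RGB t1) + 3 (RGB t2) + 1 (DSM difference) + 1 (old-map) = 8 input channels
        self.encoder = nn.Sequential(
            nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, n_classes, 1)   # per-pixel class scores

    def forward(self, rgb_t1, rgb_t2, dsm_t1, dsm_t2, old_map):
        height_change = dsm_t2 - dsm_t1           # height variation from the bitemporal DSMs
        x = torch.cat([rgb_t1, rgb_t2, height_change, old_map], dim=1)
        return self.head(self.encoder(x))

# Example with random tensors standing in for one 256 x 256 tile.
net = BuildingChangeNet()
rgb_t1 = torch.rand(1, 3, 256, 256)
rgb_t2 = torch.rand(1, 3, 256, 256)
dsm_t1 = torch.rand(1, 1, 256, 256)
dsm_t2 = torch.rand(1, 1, 256, 256)
old_map = torch.randint(0, 2, (1, 1, 256, 256)).float()
logits = net(rgb_t1, rgb_t2, dsm_t1, dsm_t2, old_map)   # shape (1, 3, 256, 256)
print(logits.shape)
```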


Author(s):
X. Shi
L. Lu
S. Yang
G. Huang
Z. Zhao

Current change detection technologies and methods for SAR imagery are mostly pixel-based. It is difficult for pixel-based technologies to utilize the spatial characteristics of images and the topological relations of objects. Object-oriented approaches take objects as the processing unit, exploiting the shape and texture information of the image, and can greatly improve the efficiency and reliability of change detection. With the development of polarimetric synthetic aperture radar (PolSAR), more backscattering features at different polarization states have become available for object-oriented change detection. Because different targets, or different states of the same target, exhibit different backscattering characteristics depending on the polarization state, this paper proposes an object-oriented change detection method based on the weighted polarimetric scattering difference of PolSAR images. The method operates on objects generated by generalized statistical region merging (GSRM) segmentation. The merit of GSRM is that segmentation is performed on the polarimetric coherency matrix, taking full advantage of the polarimetric backscattering features. The polarimetric scattering difference measure is then constructed by combining the correlation of the covariance matrices with the difference in scattering power. By analysing how the covariance-matrix correlation and the scattering-power difference each contribute to the overall measure, a weighting scheme balances the two terms so that weights can be chosen to reduce the false alarm rate. The effectiveness of the proposed algorithm is tested by detecting crop growth with two temporal RADARSAT-2 fully polarimetric datasets. First, objects are produced by the GSRM algorithm from the coherency matrix during pre-processing. The corresponding patches are then extracted from the two temporal images to measure object-level differences, a difference map is created from the weighted polarimetric scattering difference, and the final change detection result is obtained by thresholding. The experiments show that this approach is feasible and effective, and that a reasonable choice of weights can improve the detection accuracy significantly.
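
The per-object change measure described above, combining covariance-matrix correlation with scattering-power difference under a balancing weight, might look roughly like the following sketch. The exact formulas for the two terms are assumptions; only the weighted combination and the final thresholding follow the description.

```python
# Sketch of a weighted polarimetric scattering difference; the two component formulas are assumed.
import numpy as np

def covariance_correlation(c1: np.ndarray, c2: np.ndarray) -> float:
    """Normalized correlation between two object-level covariance matrices."""
    num = np.abs(np.trace(c1.conj().T @ c2))
    den = np.sqrt(np.trace(c1.conj().T @ c1).real * np.trace(c2.conj().T @ c2).real)
    return float(num / den)

def power_difference(c1: np.ndarray, c2: np.ndarray) -> float:
    """Normalized difference of total scattering power (span = trace of C)."""
    p1, p2 = np.trace(c1).real, np.trace(c2).real
    return float(abs(p1 - p2) / (p1 + p2))

def scattering_difference(c1, c2, weight: float = 0.5) -> float:
    """Weighted combination; `weight` balances the two terms (assumed to lie in [0, 1])."""
    return weight * (1.0 - covariance_correlation(c1, c2)) + \
           (1.0 - weight) * power_difference(c1, c2)

# Two synthetic 3x3 Hermitian covariance matrices for one object in each temporal image.
rng = np.random.default_rng(1)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
b = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
c_t1, c_t2 = a @ a.conj().T, b @ b.conj().T

d = scattering_difference(c_t1, c_t2, weight=0.6)
changed = d > 0.3          # threshold on the object-level difference
print(f"difference = {d:.3f}, changed = {changed}")
```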


2010
Vol 2 (6)
pp. 1508-1529
Author(s):
Abdullah Almutairi
Timothy A. Warner

2018
Vol 10 (9)
pp. 3301
Author(s):
Honglyun Park
Jaewan Choi
Wanyong Park
Hyunchun Park

This study aims to reduce the false alarm rate caused by relief displacement and seasonal effects in change detection with high-spatial-resolution multitemporal satellite images. Cross-sharpened images were used to increase the accuracy of unsupervised change detection results. A cross-sharpened image is defined as a combination of synthetically pan-sharpened images obtained by pan-sharpening multitemporal images (two panchromatic and two multispectral images) acquired before and after the change. A total of four cross-sharpened images were generated and used in combination for change detection. Sequential spectral change vector analysis (S2CVA), which comprises the magnitude and direction information of the difference image of the multitemporal images, was applied to the cross-sharpened images to minimize the false alarm rate. Specifically, the direction information of S2CVA was used to minimize the false alarm rate when applying the S2CVA algorithm to the cross-sharpened images. We improved the change detection accuracy by integrating the magnitude and direction information obtained using S2CVA for the cross-sharpened images. In the experiment using KOMPSAT-2 satellite imagery, the false alarm rate of the change detection results decreased with the use of cross-sharpened images compared to using only the magnitude information from the original S2CVA.
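
A rough sketch of the magnitude and direction information that change vector analysis derives from a pair of sharpened multispectral images, and of using the two jointly to suppress false alarms. The direction measure used here (cosine against the dominant change vector) and the thresholds are illustrative assumptions, not the exact S2CVA formulation applied to cross-sharpened images.

```python
# Change-vector magnitude and a simple direction measure on synthetic data; not the exact S2CVA method.
import numpy as np

rng = np.random.default_rng(0)
t1 = rng.random((100, 100, 4))               # sharpened multispectral image, date 1
t2 = t1 + rng.normal(0, 0.02, t1.shape)
t2[20:40, 20:40] += 0.4                      # simulated change

diff = t2 - t1                               # spectral change vectors
magnitude = np.linalg.norm(diff, axis=2)     # change magnitude per pixel

# Direction: cosine between each pixel's change vector and the dominant change
# direction (first principal component of the difference image).
flat = diff.reshape(-1, diff.shape[2])
dominant = np.linalg.svd(flat - flat.mean(0), full_matrices=False)[2][0]
cos_dir = (flat @ dominant) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(dominant) + 1e-12)
direction = cos_dir.reshape(magnitude.shape)

# Keep only pixels whose magnitude is high AND whose direction agrees with the
# dominant change, the idea used above to suppress false alarms.
change_map = (magnitude > magnitude.mean() + 2 * magnitude.std()) & (np.abs(direction) > 0.7)
print(f"changed pixels: {change_map.sum()}")
```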


2021
Author(s):
Shuren Chou

Deep learning has a strong capacity for hierarchical feature learning from unlabeled remote sensing images. In this study, the simple linear iterative clustering (SLIC) method was improved to segment the image into good-quality superpixels. A convolutional neural network (CNN) was then used to extract water bodies from Sentinel-2 MSI data. In the proposed framework, the improved SLIC method obtains correct water-body boundaries by optimizing the initial clustering centers, designing a dynamic distance measure, and expanding the search space. In addition, unlike traditional water-body extraction methods, it can achieve multi-level water-body detection. Experimental results showed that this method had higher detection accuracy and robustness than other methods. This study was able to extract water bodies from remotely sensed images with deep learning and to conduct an accuracy assessment.
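
A minimal sketch of the superpixel-plus-CNN pipeline described above, using the standard scikit-image SLIC (not the study's improved variant) and an illustrative, untrained PyTorch patch classifier on synthetic Sentinel-2-like data.

```python
# SLIC superpixels + a small patch CNN; the architecture and patch scheme are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((120, 120, 4)).astype(np.float32)   # 4 synthetic Sentinel-2 bands

# 1. Superpixel segmentation.
segments = slic(image, n_segments=150, compactness=0.1, channel_axis=2)

# 2. A small CNN that labels a fixed-size patch around each superpixel centroid
#    as water / non-water.
class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, 2),
        )
    def forward(self, x):
        return self.net(x)

model = PatchCNN()
patch_size = 16

def superpixel_patch(label: int) -> torch.Tensor:
    """Crop a patch centred on the superpixel's centroid (clipped at the image borders)."""
    ys, xs = np.nonzero(segments == label)
    cy, cx = int(ys.mean()), int(xs.mean())
    y0 = np.clip(cy - patch_size // 2, 0, image.shape[0] - patch_size)
    x0 = np.clip(cx - patch_size // 2, 0, image.shape[1] - patch_size)
    patch = image[y0:y0 + patch_size, x0:x0 + patch_size]
    return torch.from_numpy(patch).permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():   # untrained here; training would use labelled water / non-water patches
    labels = {l: model(superpixel_patch(l)).argmax(1).item() for l in np.unique(segments)[:5]}
print(labels)
```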

