Object-Based Features for House Detection from RGB High-Resolution Images

2018, Vol 10(3), pp. 451
Author(s): Renxi Chen, Xinhui Li, Jonathan Li
Sensors, 2021, Vol 21(1), pp. 320
Author(s): Emilio Guirado, Javier Blanco-Sacristán, Emilio Rodríguez-Caballero, Siham Tabik, Domingo Alcaraz-Segura, et al.

Vegetation generally appears scattered in drylands. Its structure, composition and spatial patterns are key controls of biotic interactions, water, and nutrient cycles. Applying segmentation methods to very high-resolution images for monitoring changes in vegetation cover can provide relevant information for dryland conservation ecology. For this reason, improving segmentation methods and understanding the effect of spatial resolution on segmentation results is key to improving dryland vegetation monitoring. We explored and analyzed the accuracy of Object-Based Image Analysis (OBIA) and Mask Region-based Convolutional Neural Networks (Mask R-CNN), and the fusion of both methods, in the segmentation of scattered vegetation in a dryland ecosystem. As a case study, we mapped Ziziphus lotus, the dominant shrub of a habitat of conservation priority in one of the driest areas of Europe. Our results show for the first time that fusing the results from OBIA and Mask R-CNN increases the accuracy of the segmentation of scattered shrubs by up to 25% compared to either method alone. Hence, by fusing OBIA and Mask R-CNN on very high-resolution images, the improved segmentation accuracy of vegetation mapping would lead to more precise and sensitive monitoring of changes in biodiversity and ecosystem services in drylands.
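As one illustration of the general idea behind such a fusion, the sketch below combines a binary OBIA shrub mask with a Mask R-CNN prediction by retaining OBIA segments that the network confirms. The overlap rule, the 0.5 threshold, and all array names are assumptions for illustration only, not the authors' actual fusion procedure.

```python
import numpy as np
from scipy import ndimage

def fuse_segmentations(obia_mask, rcnn_mask, min_overlap=0.5):
    """Keep OBIA segments (connected components of the OBIA shrub mask)
    whose overlap with the Mask R-CNN mask exceeds `min_overlap`
    of the segment's own area."""
    labels, n_segments = ndimage.label(obia_mask)
    fused = np.zeros_like(obia_mask, dtype=bool)
    for i in range(1, n_segments + 1):
        segment = labels == i
        overlap = (segment & rcnn_mask.astype(bool)).sum() / segment.sum()
        if overlap >= min_overlap:          # segment confirmed by the network
            fused |= segment
    return fused

# Toy example: random masks stand in for real segmentation outputs.
rng = np.random.default_rng(0)
obia = rng.random((64, 64)) > 0.7
rcnn = rng.random((64, 64)) > 0.7
print(fuse_segmentations(obia, rcnn).sum(), "pixels retained")
```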


Author(s): M. Boldt, A. Thiele, K. Schulz, S. Hinz

In recent years, the spatial resolution of remote sensing sensors and imagery has continuously improved. Focusing on spaceborne Synthetic Aperture Radar (SAR) sensors, satellites of the current generation (TerraSAR-X, COSMO-SkyMed) can acquire images with sub-meter resolution. High-resolution imagery is visually much easier to interpret, but most established pixel-based analysis methods have become more or less impracticable, since in high-resolution images individual objects (vehicles, buildings) are represented by a large number of pixels. Object-Based Image Analysis (OBIA) methods address this problem. Objects (segments) are groupings of pixels produced by image segmentation algorithms based on homogeneity criteria. The image content is then represented by segments, which allows the development of rule-based analysis schemes; for example, segments can be described or categorized by their local neighborhood in a context-based manner.

In this paper, a novel method for the segmentation of high-resolution SAR images is presented. It is based on the calculation of morphological differential attribute profiles (DAPs), which are analyzed pixel-wise in a region-growing procedure. The method distinguishes between heterogeneous and homogeneous image content and delivers a precise segmentation result.
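As a rough illustration of what a differential attribute profile is, the sketch below computes per-pixel differences between successive morphological area openings and closings with scikit-image. The area thresholds are arbitrary assumptions, and the paper's pixel-wise region-growing analysis of the DAP is not reproduced here.

```python
import numpy as np
from skimage.morphology import area_opening, area_closing

def differential_attribute_profile(image, area_thresholds=(100, 500, 2000, 8000)):
    """Per-pixel differential attribute profile (DAP): differences between
    successive area openings (bright structures) and area closings (dark
    structures). A large response marks the scale at which a structure
    containing that pixel is removed."""
    image = image.astype(np.float64)
    openings = [image] + [area_opening(image, t) for t in area_thresholds]
    closings = [image] + [area_closing(image, t) for t in area_thresholds]
    dap_bright = np.stack([openings[i] - openings[i + 1]
                           for i in range(len(area_thresholds))])
    dap_dark = np.stack([closings[i + 1] - closings[i]
                         for i in range(len(area_thresholds))])
    return dap_bright, dap_dark

# Toy example on a random image standing in for a SAR amplitude image.
rng = np.random.default_rng(0)
bright, dark = differential_attribute_profile(rng.random((256, 256)))
print(bright.shape, dark.shape)   # (4, 256, 256) each
```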


2020, Vol 163, pp. 171-186
Author(s): Zhen Guan, Amr Abd-Elrahman, Zhen Fan, Vance M. Whitaker, Benjamin Wilkinson

2018, Vol 27(10), pp. 699
Author(s): Melanie K. Vanderhoof, Clifton Burt, Todd J. Hawbaker

Interpretations of post-fire condition and rates of vegetation recovery can influence management priorities, actions and perception of latent risks from landslides and floods. In this study, we used the Waldo Canyon fire (2012, Colorado Springs, Colorado, USA) as a case study to explore how a time series (2011–2016) of high-resolution images can be used to delineate burn extent and severity, as well as quantify post-fire vegetation recovery. We applied an object-based approach to map burn severity and vegetation recovery using Worldview-2, Worldview-3 and QuickBird-2 imagery. The burned area was classified as 51% high, 20% moderate and 29% low burn-severity. Across the burn extent, the shrub cover class showed a rapid recovery, resprouting vigorously within 1 year, whereas 4 years post-fire, areas previously dominated by conifers were divided approximately equally between being classified as dominated by quaking aspen saplings with herbaceous species in the understorey or minimally recovered. Relative to using a pixel-based Normalised Difference Vegetation Index (NDVI), our object-based approach showed higher rates of revegetation. High-resolution imagery can provide an effective means to monitor post-fire site conditions and complement more prevalent efforts with moderate- and coarse-resolution sensors.
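The pixel-based comparison above relies on the Normalised Difference Vegetation Index. A minimal, self-contained sketch of computing NDVI and a post-fire NDVI difference from NIR and red bands follows; the random arrays merely stand in for co-registered reflectance bands.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Stand-in bands for an immediately post-fire image and a later acquisition.
rng = np.random.default_rng(0)
nir_post, red_post = rng.random((2, 128, 128))
nir_later, red_later = rng.random((2, 128, 128))

# Positive dNDVI suggests revegetation between the two acquisitions.
dndvi = ndvi(nir_later, red_later) - ndvi(nir_post, red_post)
print("mean dNDVI:", dndvi.mean())
```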


Author(s): M. Barzegar, H. Ebadi, A. Kiani

Digital aerial images acquired with the UltraCam sensor are a valuable resource for producing high-resolution land-cover information. In this research, different methods for extracting vegetation from semi-urban and agricultural regions were studied, and their results were compared in terms of overall accuracy and the Kappa statistic. To do this, several vegetation indices were first tested on three image datasets with different object-based classification setups, varying the presence or absence of sample data, the definition of additional features, and the number of classes; the effect of each case on the final results was evaluated. Afterwards, pixel-based classification was performed on each dataset, and its accuracy was compared to that of the optimal object-based classification. The contribution of this research is to test different indices across many configurations (about 75 cases) and to characterize the quantitative and qualitative effects of adding or removing auxiliary data, giving researchers who intend to work with such high-resolution data an insight into the whole procedure of detecting vegetation, one of the most prominent and common features in such images. Results showed that the DVI index best detected vegetation regions in the test images. Furthermore, object-based classification, with an average overall accuracy of 93.6% and a Kappa of 86.5%, was more suitable for extracting vegetation than pixel-based classification, with an average overall accuracy of 81.2% and a Kappa of 59.7%.
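For reference, the DVI and the two accuracy measures reported above are straightforward to compute. The sketch below uses synthetic bands and a stand-in reference map, so the threshold values and labels are purely illustrative and unrelated to the study's data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

def dvi(nir, red):
    """Difference Vegetation Index: DVI = NIR - Red."""
    return nir.astype(np.float64) - red.astype(np.float64)

# Illustrative evaluation against reference labels (1 = vegetation, 0 = other).
rng = np.random.default_rng(0)
nir = rng.uniform(0, 1, (100, 100))
red = rng.uniform(0, 1, (100, 100))
reference = (nir - red > 0.2).astype(int)        # stand-in reference map
predicted = (dvi(nir, red) > 0.25).astype(int)   # thresholded DVI classification

print("Overall accuracy:", accuracy_score(reference.ravel(), predicted.ravel()))
print("Kappa:", cohen_kappa_score(reference.ravel(), predicted.ravel()))
```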


2021, Vol 13(18), pp. 3660
Author(s): Sejung Jung, Won Hee Lee, Youkyung Han

Building change detection is a critical field for monitoring artificial structures using high-resolution multitemporal images. However, relief displacement, which depends on the azimuth and elevation angles of the sensor, causes numerous false alarms and misdetections of building changes. Therefore, this study proposes an effective object-based building change detection method that considers the azimuth and elevation angles of sensors in high-resolution images. To this end, segmentation images were generated from the high-resolution images using a multiresolution segmentation technique, after which object-based building detection was performed. For detecting building candidates, we calculated feature information that could describe building objects, such as rectangular fit, gray-level co-occurrence matrix (GLCM) homogeneity, and area. Final building detection was then performed considering the location relationship between building objects and their shadows using the Sun's azimuth angle. Subsequently, change detection on the final building objects was performed using three methods that consider the relationship of building object properties between the images. First, only overlapping objects between images were considered when detecting changes. Second, the size difference between objects according to the sensor's elevation angle was considered. Third, the direction between objects according to the sensor's azimuth angle was analyzed. To confirm the effectiveness of the proposed object-based building change detection, two areas with dense building coverage were selected as study sites. Site 1 used bitemporal images from a single sensor (KOMPSAT-3), whereas Site 2 consisted of multi-sensor images from KOMPSAT-3 and an unmanned aerial vehicle (UAV). The results from both sites revealed that considering additional shadow information yielded more accurate building detection than using feature information only. Furthermore, the results of the three object-based change detection methods were compared and analyzed according to the characteristics of the study area and the sensors. The proposed object-based change detection achieved higher accuracy than existing building detection methods.
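As an illustration of the shadow-based confirmation step, the sketch below shifts a candidate building mask away from the Sun (using the Sun's azimuth) and tests for overlap with a shadow mask. The pixel offset, overlap threshold, and image-axis conventions are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
from scipy import ndimage

def shadow_confirms_building(building_mask, shadow_mask,
                             sun_azimuth_deg, offset_px=10, min_overlap=0.2):
    """Shift the candidate building mask away from the Sun by `offset_px`
    pixels and check whether it overlaps the shadow mask. Azimuth is
    measured clockwise from north; image rows increase southwards."""
    az = np.deg2rad(sun_azimuth_deg)
    drow = round(offset_px * np.cos(az))    # away from Sun: southward component
    dcol = -round(offset_px * np.sin(az))   # away from Sun: westward component
    shifted = ndimage.shift(building_mask.astype(float), (drow, dcol), order=0) > 0.5
    overlap = (shifted & shadow_mask.astype(bool)).sum()
    return overlap / max(shifted.sum(), 1) > min_overlap

# Toy example: a square building with a shadow block to its south-west,
# Sun in the north-east (azimuth 45 degrees).
building = np.zeros((100, 100), dtype=bool)
building[40:50, 40:50] = True
shadow = np.zeros((100, 100), dtype=bool)
shadow[50:60, 30:40] = True
print(shadow_confirms_building(building, shadow, sun_azimuth_deg=45.0))
```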


Author(s): S. M. Li, Z. Y. Li, E. X. Chen, Q. W. Liu

Forest cover monitoring is an important part of forest management at the local or regional scale. The structure and tones of forest can be identified in high spatial resolution remote sensing images, and when forest cover changes, its spectral characteristics also change. In this paper, a method for object-based forest cover monitoring using data transformation of a time series of high-resolution images is put forward. First, the NDVI difference image and the composite of PC3, PC4, and PC5 from the stacked eight layers of the high-resolution satellite time series are segmented into homogeneous objects. Then, by developing an object-based rule-set classification system, the spatial extent of deforestation and afforestation can be identified over time across the landscape. Finally, the change detection accuracy is assessed with reference data.
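A compact sketch of the data transformation described above (the NDVI difference image plus a PC3-PC5 composite of the eight stacked layers) is shown below. The band order and random arrays are placeholder assumptions, and the segmentation and rule-set classification steps are not included.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-ins for two co-registered 4-band acquisitions (band 0 = red, band 1 = NIR).
rng = np.random.default_rng(1)
t1 = rng.random((4, 200, 200))
t2 = rng.random((4, 200, 200))

# NDVI difference image between the two dates.
ndvi_t1 = (t1[1] - t1[0]) / np.maximum(t1[1] + t1[0], 1e-6)
ndvi_t2 = (t2[1] - t2[0]) / np.maximum(t2[1] + t2[0], 1e-6)
ndvi_diff = ndvi_t2 - ndvi_t1

# PCA over the eight stacked layers; keep PC3, PC4, PC5 as a change composite.
stack = np.concatenate([t1, t2], axis=0)          # shape (8, 200, 200)
flat = stack.reshape(8, -1).T                     # pixels x layers
pcs = PCA(n_components=5).fit_transform(flat)
pc_composite = pcs[:, 2:5].T.reshape(3, 200, 200)
print(ndvi_diff.shape, pc_composite.shape)
```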

