Synergy of very high resolution optical and radar data for object-based olive grove mapping

2011, Vol 25 (6), pp. 971-989
Author(s): Jan Peters, Frieke Van Coillie, Toon Westra, Robert De Wulf

Sensors, 2021, Vol 21 (1), pp. 320
Author(s): Emilio Guirado, Javier Blanco-Sacristán, Emilio Rodríguez-Caballero, Siham Tabik, Domingo Alcaraz-Segura, et al.

Vegetation generally appears scattered in drylands. Its structure, composition, and spatial patterns are key controls of biotic interactions, water, and nutrient cycles. Applying segmentation methods to very high-resolution images for monitoring changes in vegetation cover can provide relevant information for dryland conservation ecology. For this reason, improving segmentation methods and understanding the effect of spatial resolution on segmentation results is key to improving dryland vegetation monitoring. We explored and analyzed the accuracy of Object-Based Image Analysis (OBIA), Mask Region-based Convolutional Neural Networks (Mask R-CNN), and the fusion of both methods in the segmentation of scattered vegetation in a dryland ecosystem. As a case study, we mapped Ziziphus lotus, the dominant shrub of a habitat of conservation priority in one of the driest areas of Europe. Our results show for the first time that fusing the results from OBIA and Mask R-CNN increases the accuracy of the segmentation of scattered shrubs by up to 25% compared to either method alone. Hence, by fusing OBIA and Mask R-CNN outputs on very high-resolution images, the improved segmentation accuracy of vegetation mapping would lead to more precise and sensitive monitoring of changes in biodiversity and ecosystem services in drylands.
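The abstract does not spell out the fusion rule, but the idea of combining the two outputs can be illustrated with a minimal sketch that fuses an OBIA shrub mask and a Mask R-CNN shrub mask at the pixel level. The function name and the union/intersection rules below are assumptions for illustration, not the authors' method.

```python
import numpy as np

def fuse_segmentations(obia_mask: np.ndarray, rcnn_mask: np.ndarray,
                       rule: str = "union") -> np.ndarray:
    """Fuse two binary shrub masks (1 = shrub, 0 = background).

    obia_mask : mask produced by an OBIA workflow (e.g., segmentation
                plus rule-based classification).
    rcnn_mask : mask produced by Mask R-CNN predictions, flattened to
                a single binary layer.
    rule      : "union" keeps pixels detected by either method,
                "intersection" keeps only pixels where both agree.
    """
    if obia_mask.shape != rcnn_mask.shape:
        raise ValueError("Masks must cover the same raster extent")
    if rule == "union":
        return np.logical_or(obia_mask, rcnn_mask).astype(np.uint8)
    if rule == "intersection":
        return np.logical_and(obia_mask, rcnn_mask).astype(np.uint8)
    raise ValueError(f"Unknown fusion rule: {rule}")

# Toy example: two 4x4 masks that partially agree on a shrub footprint.
obia = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]], dtype=np.uint8)
rcnn = np.array([[0,0,1,1],[0,1,1,1],[0,0,1,0],[0,0,0,0]], dtype=np.uint8)
print(fuse_segmentations(obia, rcnn, rule="intersection"))
```

In such a scheme, the intersection rule trades recall for precision, while the union rule does the opposite; how the actual fusion weighs the two sources is defined in the paper itself.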


Author(s): L. Hang, G. Y. Cai

Abstract. The detection and reconstruction of buildings have attracted increasing attention in the remote sensing and computer vision communities. Light detection and ranging (LiDAR) has proved to be a good way to extract building roofs, but suitable data are often unavailable. In this paper, we extracted building roofs from very high resolution (VHR) images of the Chinese satellite Gaofen-2 by employing a convolutional neural network (CNN). The CNN has been shown to be better at recognizing detailed features that may not be distinguished by an object-based classification approach. This study involves several major steps: generation of the training dataset, model training, image segmentation, and building roof recognition. First, urban objects such as trees, roads, squares, and buildings were classified with the random forest algorithm in an object-oriented classification approach, and the building regions were separated from the other classes with the aid of visual interpretation and correction. Next, different types of building roofs, categorized mainly by color and size, were used to train the CNN. Finally, industrial and residential building roofs were recognized individually and the results were validated separately. The assessment confirms the effectiveness of the proposed method, with quality rates of approximately 91% and 88% for detecting industrial and residential building roofs, respectively, which indicates that the CNN approach is promising for detecting buildings with high accuracy.
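As an illustration of the object-oriented random forest classification step, the following Python sketch trains a scikit-learn RandomForestClassifier on a hypothetical per-segment feature table. The feature set, class labels, and data are invented for demonstration and are not the authors' dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-segment feature table: each row is one image object
# described by mean spectral values per band plus simple geometry features.
rng = np.random.default_rng(0)
n_objects = 500
features = rng.random((n_objects, 6))        # e.g., 4 band means + area + compactness
labels = rng.integers(0, 4, size=n_objects)  # 0=tree, 1=road, 2=square, 3=building

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=42)

# Random forest classifier for the object-oriented classification step.
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train, y_train)

print(classification_report(y_test, rf.predict(X_test)))
```

With real data, the feature table would be built from the segmented VHR image, and the objects labeled as "building" would then feed the CNN training set described above.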


2020, Vol 12 (6), pp. 983
Author(s): Youkyung Han, Aisha Javed, Sejung Jung, Sicong Liu

Change detection (CD), one of the primary applications of multi-temporal satellite images, is the process of identifying changes in the Earth’s surface occurring over a period of time using images of the same geographic area acquired on different dates. CD is divided into pixel-based change detection (PBCD) and object-based change detection (OBCD). Although PBCD is more popular due to its simple algorithms and relatively easy quantitative analysis, applying it to very high resolution (VHR) images often results in misdetection or noise. Because of this, researchers have focused on extending PBCD results to OBCD maps in VHR images. In this paper, we propose a weighted Dempster-Shafer theory (wDST) fusion method to generate the OBCD by combining multiple PBCD results. The proposed wDST approach automatically calculates and assigns a certainty weight to each object of the PBCD result while considering the stability of the object. Moreover, the proposed wDST method can minimize the tendency of the number of changed objects to decrease or increase based on the ratio of changed pixels to the total pixels in the image when the PBCD result is extended to the OBCD result. First, we performed co-registration between the VHR multitemporal images to minimize geometric dissimilarity. Then, we conducted image segmentation of the co-registered pair of multitemporal VHR imagery. Three change intensity images were generated using change vector analysis (CVA), iteratively reweighted multivariate alteration detection (IRMAD), and principal component analysis (PCA). These three intensity images were used to generate different binary PBCD maps, after which the maps were fused with the segmented image using the wDST to generate the OBCD map. Finally, the accuracy of the proposed CD technique was assessed using a manually digitized map. Two VHR multitemporal datasets were used to test the proposed approach. Experimental results confirmed the superiority of the proposed method in comparison with existing PBCD methods and an OBCD method based on majority voting.
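The core of the wDST step can be illustrated with a small Python sketch of Dempster's rule of combination over the frame {change, no change}, with a simple reliability discounting standing in for the per-object certainty weights. The mass values, weights, and function names are assumptions for illustration, not the paper's exact formulation.

```python
def discount(mass: dict, weight: float) -> dict:
    """Discount a mass function by a certainty weight in [0, 1].

    The discounted belief is shifted to the full frame 'CN' (ignorance),
    a common way to weight evidence sources in Dempster-Shafer fusion.
    """
    return {
        "C": weight * mass.get("C", 0.0),
        "N": weight * mass.get("N", 0.0),
        "CN": 1.0 - weight * (mass.get("C", 0.0) + mass.get("N", 0.0)),
    }

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination over the frame {C (change), N (no change)}."""
    sets = {"C": {"C"}, "N": {"N"}, "CN": {"C", "N"}}
    fused = {"C": 0.0, "N": 0.0, "CN": 0.0}
    conflict = 0.0
    for a, sa in sets.items():
        for b, sb in sets.items():
            prod = m1[a] * m2[b]
            inter = sa & sb
            if not inter:
                conflict += prod            # contradictory evidence
            elif inter == {"C"}:
                fused["C"] += prod
            elif inter == {"N"}:
                fused["N"] += prod
            else:
                fused["CN"] += prod
    norm = 1.0 - conflict
    return {k: v / norm for k, v in fused.items()}

# Two per-object PBCD evidences (e.g., from CVA and IRMAD) with certainty weights.
m_cva = discount({"C": 0.7, "N": 0.3}, weight=0.9)
m_irmad = discount({"C": 0.6, "N": 0.4}, weight=0.6)
print(combine(m_cva, m_irmad))  # fused masses for change / no change / ignorance
```

Repeating this combination for each image object, and thresholding the fused change mass, yields an object-level change map in the spirit of the workflow described above.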

