Object-based island green cover mapping by integrating UAV multispectral image and LiDAR data

2021 ◽  
Vol 15 (03) ◽  
Author(s):  
Hao Liu ◽  
Pengfeng Xiao ◽  
Xueliang Zhang ◽  
Xinghua Zhou ◽  
Jie Li ◽  
...  
Author(s):  
R. A. Loberternos ◽  
W. P. Porpetcho ◽  
J. C. A. Graciosa ◽  
R. R. Violanda ◽  
A. G. Diola ◽  
...  

Traditional remote sensing approaches for mapping aquaculture ponds typically rely on aerial photography and high-resolution imagery. The current study demonstrates the use of object-based image processing and analysis of LiDAR-derived images with 1-meter resolution, namely: the CHM (canopy height model), DSM (digital surface model), DTM (digital terrain model), Hillshade, Intensity, NumRet (number of returns), and Slope layers. A Canny edge detection algorithm was also applied to the Hillshade layer to create a new image (Canny layer) with more clearly defined edges. These derivative images were then used as input layers for a multi-resolution segmentation algorithm tuned to delineate the aquaculture ponds. To extract the aquaculture pond feature, three major classes were defined for classification: land, vegetation, and water. Classification was first performed using an assign-class algorithm that labeled segments with mean Slope values of 10 or lower as Flat Surfaces. Within these Flat Surfaces, the assign-class algorithm was then applied to identify the Water feature using a threshold value of 63.5. The segments identified as Water were merged to form larger water bodies comprising the aquaculture ponds. The present study shows that LiDAR data coupled with object-based classification can be an effective approach for mapping coastal aquaculture ponds. The workflow presented here can serve as a model for mapping other areas in the Philippines where aquaculture ponds exist.
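The two-step assign-class logic described above can be sketched as a simple per-segment rule. This is a hypothetical illustration, not the authors' eCognition rule-set: the function names, the example segment statistics, and the assumption that Water falls *below* the 63.5 threshold are all assumptions made here; the abstract does not state which layer the 63.5 threshold applies to or its direction.

```python
# Hypothetical sketch of the rule-based classification described in the
# abstract: segments with mean Slope <= 10 become "Flat Surface"; of those,
# segments whose mean value on the thresholded layer is <= 63.5 become
# "Water" (the threshold direction is an assumption here).

def classify_segment(mean_slope, mean_value,
                     slope_max=10.0, water_threshold=63.5):
    """Return a class label for one image segment."""
    if mean_slope <= slope_max:            # assign-class rule 1: Flat Surfaces
        if mean_value <= water_threshold:  # assign-class rule 2: Water
            return "Water"
        return "Flat Surface"
    return "Other"                         # land/vegetation handled elsewhere

def merge_water(segments):
    """Collect segments classified as Water (merged into water bodies)."""
    return [s for s in segments if classify_segment(*s) == "Water"]

# Illustrative (mean_slope, mean_value) statistics for three segments
segments = [(4.2, 40.0), (8.0, 90.0), (25.0, 30.0)]
water_bodies = merge_water(segments)
```

In the actual workflow the merged Water segments would then form the aquaculture pond polygons; here they are simply collected into a list.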


Author(s):  
Rudolph Joshua Candare ◽  
Michelle Japitana ◽  
James Earl Cubillas ◽  
Cherry Bryan Ramirez

This research describes the methods involved in mapping different high-value crops in Agusan del Norte, Philippines using LiDAR. This project is part of the Phil-LiDAR 2 Program, which aims to conduct a nationwide resource assessment using LiDAR. Because of the high-resolution data involved, the methodology described here utilizes object-based image analysis and optimal features derived from the LiDAR data and orthophotos. Object-based classification was primarily done by developing rule-sets in eCognition, drawing on several features from the LiDAR data and orthophotos. In general, classes of objects cannot be separated by simple thresholds on individual features, making it difficult to develop a rule-set. To resolve this problem, the image objects were subjected to Support Vector Machine (SVM) learning. SVMs have gained popularity because of their ability to generalize well from a limited number of training samples. However, SVMs also suffer from parameter assignment issues that can significantly affect the classification results; in particular, the regularization parameter C of the linear SVM has to be optimized through cross-validation to increase the overall accuracy. After performing the segmentation in eCognition, the optimization procedure as well as the extraction of the hyperplane equations was done in MATLAB. The learned hyperplanes separating one class from another in the multi-dimensional feature space can be thought of as super-features, which were then used in developing the classifier rule-set in eCognition. In this study, we report an overall classification accuracy of greater than 90% in different areas.
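The "super-feature" idea above amounts to evaluating each learned hyperplane equation f(x) = w·x + b on an image object's feature vector and thresholding the resulting score in a rule-set. A minimal sketch, with purely illustrative weights and features (the study learned its weights in MATLAB via cross-validation over C; none of the numbers below come from the paper):

```python
# Sketch of using a learned linear-SVM hyperplane as a "super-feature".
# Weights, bias, and feature values are hypothetical placeholders.

def hyperplane_score(features, weights, bias):
    """Score f(x) = w.x + b for one image object. Positive scores fall on
    one side of the hyperplane (one class), negative on the other; the
    score itself can be exposed as a single feature in a rule-set."""
    return sum(w * x for w, x in zip(weights, features)) + bias

w = [0.8, -1.5, 0.02]    # hypothetical learned weights
b = -0.4                 # hypothetical learned bias
obj = [2.1, 0.3, 35.0]   # illustrative object features (e.g. height, NDVI, intensity)

score = hyperplane_score(obj, w, b)
label = "class A" if score > 0 else "class B"  # rule-set thresholds the score at 0
```

Collapsing a multi-dimensional decision boundary into one scalar score is what makes the rule-set feasible: a single threshold at zero replaces a combination of thresholds that no individual feature could provide.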


2020 ◽  
Vol 12 (11) ◽  
pp. 1702 ◽  
Author(s):  
Thanh Huy Nguyen ◽  
Sylvie Daniel ◽  
Didier Guériot ◽  
Christophe Sintès ◽  
Jean-Marc Le Caillec

Automatic extraction of buildings in urban and residential scenes has been a subject of growing interest in photogrammetry and remote sensing, particularly since the mid-1990s. The active contour model, colloquially known as the snake model, has been studied as a means of extracting buildings from aerial and satellite imagery. However, this task remains very challenging due to the variability of building size and shape and the complexity of the surrounding environment. This complexity is a major obstacle to reliable large-scale building extraction, since the prior information and assumptions involved, such as building shape, size, and color, cannot be generalized over large areas. This paper presents an efficient snake model to overcome this challenge, called the Super-Resolution-based Snake Model (SRSM). The SRSM operates on high-resolution Light Detection and Ranging (LiDAR)-based elevation images, called z-images, generated by a super-resolution process applied to LiDAR data. The balloon force model is also improved to shrink or inflate adaptively, instead of inflating continuously. The method is applicable at large scales, such as an entire city or larger, while offering a high level of automation and requiring no prior knowledge or training data from the urban scenes (hence unsupervised). It achieves high overall accuracy when tested on various datasets. For instance, the proposed SRSM yields an average area-based Quality of 86.57% and object-based Quality of 81.60% on the ISPRS Vaihingen benchmark datasets. Compared to other methods using this benchmark dataset, this level of accuracy would be highly desirable even for a supervised method. Similarly desirable outcomes are obtained when applying the SRSM to the whole City of Quebec (total area of 656 km²), yielding an area-based Quality of 62.37% and an object-based Quality of 63.21%.
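The Quality figures quoted above follow the metric commonly used in building-extraction evaluation, Quality = TP / (TP + FP + FN), where TP, FP, and FN are counts of true-positive, false-positive, and false-negative pixels (area-based) or buildings (object-based). A minimal sketch, assuming that definition; the counts below are illustrative, not taken from the paper:

```python
# Quality metric for building extraction: Quality = TP / (TP + FP + FN).
# Unlike plain accuracy, it penalizes both missed buildings (FN) and
# spurious detections (FP) while ignoring the (large) true-negative area.

def quality(tp, fp, fn):
    """Return Quality in [0, 1]; 1.0 means a perfect extraction."""
    denom = tp + fp + fn
    return tp / denom if denom else 0.0

# Illustrative pixel counts: 8657 true positives, 700 false positives,
# 643 false negatives -> Quality of 0.8657 (86.57%)
q = quality(8657, 700, 643)
```

A single Quality value can thus be read as a combined completeness/correctness score, which is why both area-based and object-based variants are reported in the abstract.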


2014 ◽  
Vol 8 (1) ◽  
pp. 083529 ◽  
Author(s):  
Diane M. Styers ◽  
L. Monika Moskal ◽  
Jeffrey J Richardson ◽  
Meghan A. Halabisky
