Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

2014 ◽  
Vol 8 (1) ◽  
pp. 083529 ◽  
Author(s):  
Diane M. Styers ◽  
L. Monika Moskal ◽  
Jeffrey J. Richardson ◽  
Meghan A. Halabisky
2021 ◽  
Vol 13 (10) ◽  
pp. 1868
Author(s):  
Martina Deur ◽  
Mateo Gašparović ◽  
Ivan Balenović

Gathering quality tree species information is the basis for making proper decisions in forest management. By applying new technologies and remote sensing methods, very high resolution (VHR) satellite imagery can provide sufficient spatial detail to achieve accurate species-level classification. In this study, the influence of pansharpening of WorldView-3 (WV-3) satellite imagery on the classification results for three main tree species (Quercus robur L., Carpinus betulus L., and Alnus glutinosa (L.) Gaertn.) was evaluated. To increase tree species classification accuracy, three different pansharpening algorithms (Bayes, RCS, and LMVM) were applied; the LMVM algorithm proved the most effective pansharpening technique. Pixel- and object-based classifications were then applied to the three pansharpened images using a random forest (RF) algorithm. The results showed very high overall accuracy (OA) for the LMVM-pansharpened imagery: 92% and 96% for tree species classification based on the pixel- and object-based approaches, respectively. As expected, the object-based approach exceeded the pixel-based approach (OA increased by 4%). The influence of fusion on the classification results was analyzed as well: the increased spatial resolution of the pansharpened images improved overall classification accuracy (OA increased by 7% for the pixel-based approach). Regardless of the classification approach, pansharpening is highly beneficial for classifying complex, natural, mixed deciduous forest areas.
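As a rough sketch of the pixel-based random forest step described above (not the authors' exact pipeline), the following Python example trains and applies an RF classifier to a labelled multiband pansharpened raster with scikit-learn; the arrays `bands` and `labels` are hypothetical placeholders.

```python
# Minimal sketch of a pixel-based random forest classification, assuming
# `bands` is a (n_bands, rows, cols) pansharpened image array and `labels`
# is a (rows, cols) array of training labels (0 = unlabelled). These names
# are illustrative, not from the original study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def classify_pixels(bands, labels, n_trees=500):
    X = bands.reshape(bands.shape[0], -1).T          # one row per pixel
    y = labels.ravel()
    mask = y > 0                                     # labelled pixels only
    X_train, X_test, y_train, y_test = train_test_split(
        X[mask], y[mask], test_size=0.3, stratify=y[mask], random_state=0)
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X_train, y_train)
    print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
    return rf.predict(X).reshape(labels.shape)       # full classified map
```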


2021 ◽  
Author(s):  
Ahmet Batuhan Polat ◽  
Ozgun Akcay ◽  
Fusun Balik Sanli

Obtaining high accuracy in land cover classification is a non-trivial problem in geosciences for monitoring urban and rural areas. In this study, different classification algorithms were tested with different types of data; in addition, the effects of seasonal changes on these classification algorithms were investigated and the data used were evaluated. The study also reveals the effect of increasing the number of classification training samples on classification accuracy. Sentinel-1 Synthetic Aperture Radar (SAR) images and Sentinel-2 multispectral optical images were used as datasets. An object-based approach was used for the classification of various fused image combinations. The Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbor (kNN) classification algorithms were used for this process. In addition, the Normalized Difference Vegetation Index (NDVI) was examined separately to define its exact contribution to classification accuracy. As a result, the overall accuracies obtained by classifying the fused data generated by combining optical and SAR images were compared. It was determined that increasing the number of training samples improves classification accuracy. Moreover, object-based classification of single SAR imagery produced the lowest classification accuracy among the dataset combinations used in this study. Finally, it was shown that NDVI data do not increase classification accuracy in the winter season, as the trees shed their leaves due to climate conditions.
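A minimal sketch of the classifier comparison and training-sample experiment described above, assuming per-object feature vectors (e.g. mean optical band values, SAR backscatter, NDVI) have already been extracted into the hypothetical arrays `features` and `labels`:

```python
# Compare SVM, RF and kNN overall accuracies while varying the training
# fraction; `features` (n_objects, n_features) and `labels` (n_objects,)
# are hypothetical stand-ins for the fused Sentinel-1/Sentinel-2 object data.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

CLASSIFIERS = {
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

def compare(features, labels, train_fractions=(0.1, 0.3, 0.5)):
    for frac in train_fractions:
        X_tr, X_te, y_tr, y_te = train_test_split(
            features, labels, train_size=frac, stratify=labels, random_state=0)
        for name, clf in CLASSIFIERS.items():
            clf.fit(X_tr, y_tr)
            oa = accuracy_score(y_te, clf.predict(X_te))
            print(f"train fraction {frac:.1f}  {name}: OA = {oa:.3f}")
```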


Author(s):  
R. A. Loberternos ◽  
W. P. Porpetcho ◽  
J. C. A. Graciosa ◽  
R. R. Violanda ◽  
A. G. Diola ◽  
...  

Traditional remote sensing approaches for mapping aquaculture ponds typically involve the use of aerial photography and high resolution images. The current study demonstrates the use of object-based image processing and analysis of LiDAR-derived images with 1-meter resolution, namely the CHM (canopy height model), DSM (digital surface model), DTM (digital terrain model), Hillshade, Intensity, NumRet (number of returns), and Slope layers. A Canny edge detection algorithm was also applied to the Hillshade layer to create a new image (Canny layer) with more defined edges. These derivative images were then used as input layers to a multi-resolution segmentation algorithm best suited to delineating the aquaculture ponds. To extract the aquaculture pond feature, three major classes were defined for classification: land, vegetation, and water. Classification was first performed with the Assign Class algorithm, labelling segments with mean Slope values of 10 or lower as Flat Surfaces. Within these Flat Surfaces, the Assign Class algorithm was then applied to identify the Water class using a threshold value of 63.5. The segments identified as Water were then merged to form the larger water bodies that comprise the aquaculture ponds. The present study shows that LiDAR data coupled with object-based classification can be an effective approach for mapping coastal aquaculture ponds. The workflow presented here can be used as a model to map other areas in the Philippines where aquaculture ponds exist.
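A minimal Python sketch of the raster-side steps named above (Canny edges on the Hillshade layer and the slope/water thresholds). The file names are hypothetical, the segmentation and merging were done in an OBIA package in the original workflow, and the layer to which the 63.5 threshold applies is assumed here to be the Intensity layer.

```python
# Canny edge layer from the hillshade plus simple threshold rules for the
# Flat Surfaces and Water classes. File names are placeholders; the original
# study applied equivalent rules to image objects, not raw pixels.
import rasterio
from skimage import feature

def read_raster(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

hillshade = read_raster("hillshade.tif")
slope = read_raster("slope.tif")
intensity = read_raster("intensity.tif")             # assumed target of the 63.5 threshold

canny = feature.canny(hillshade / hillshade.max())   # extra edge layer for segmentation
flat_surfaces = slope <= 10                          # slope rule (per pixel here)
water = flat_surfaces & (intensity <= 63.5)          # candidate aquaculture-pond pixels
```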


Author(s):  
Gordana Kaplan ◽  
Ugur Avdan

Wetland benefits include, but are not limited to, the ability to store floodwaters and improve water quality, the provision of habitats for wildlife and support of biodiversity, as well as aesthetic values. Over the past few decades, remote sensing and geographic information technologies have proven to be useful and frequently applied tools for monitoring and mapping wetlands. Combining optical and microwave satellite data can give significant information about the biophysical characteristics of wetlands and wetland vegetation, and fusing data from different sensors, such as radar and optical remote sensing data, can increase wetland classification accuracy. In this paper we investigate the ability of fusing two fine-spatial-resolution satellite datasets, Sentinel-2 and the Synthetic Aperture Radar satellite Sentinel-1, for mapping wetlands. The Balikdami wetland, located in the Anatolian part of Turkey, was selected as the study area. Both Sentinel-1 and Sentinel-2 images require pre-processing before use. After pre-processing, several vegetation indices calculated from the Sentinel-2 bands were included in the dataset, and an object-based classification was performed. For the accuracy assessment of the obtained results, a number of random points were generated over the study area. In addition, the results were compared with data from an Unmanned Aerial Vehicle (UAV) collected on the same date as the Sentinel-2 overpass and three days before the Sentinel-1 overpass. The accuracy assessment showed that the results are significant and satisfactory for wetland classification using both multispectral and microwave data. The statistical results of the fusion of the optical and radar data showed high wetland mapping accuracy, with an overall classification accuracy of approximately 90% for the object-based classification. Compared with the high-resolution UAV data, the classification results are promising for mapping and monitoring not just wetlands but also the sub-classes of the study area. For future research, multi-temporal image use and terrain data collection are recommended.
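A minimal sketch of the index-calculation and layer-stacking step assumed above: NDVI from Sentinel-2 band 8 (NIR) and band 4 (red) stacked with Sentinel-1 VV/VH backscatter. The file names are hypothetical, and all rasters are assumed already co-registered at a common resolution during pre-processing.

```python
# Build a fused optical/SAR stack with an NDVI layer as input for
# object-based classification. Band file names are placeholders.
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

nir, red = read_band("S2_B08.tif"), read_band("S2_B04.tif")
vv, vh = read_band("S1_VV.tif"), read_band("S1_VH.tif")

ndvi = (nir - red) / (nir + red + 1e-6)      # small epsilon avoids division by zero
fused = np.stack([red, nir, ndvi, vv, vh])   # input stack for segmentation/classification
```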


Author(s):  
T. Kavzoglu ◽  
M. Yildiz Erdemir ◽  
H. Tonbul

Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e., groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size, and the characteristics of the study area, is crucially important for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). A comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
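For readers unfamiliar with the ESP approach, the sketch below illustrates the LV-RoC idea: local variance (LV) is the mean standard deviation of image objects at each scale, and peaks in its rate of change between consecutive scales mark candidate fine, moderate, and coarse scale parameters. The input values are hypothetical; the actual graphs in the study were produced by the ESP-2 tool.

```python
# Rate of change of local variance between consecutive scale parameters:
# RoC_i = 100 * (LV_i - LV_{i-1}) / LV_{i-1}; local maxima of RoC suggest
# candidate segmentation scales. Inputs are illustrative only.
def rate_of_change(lv):
    return [100.0 * (lv[i] - lv[i - 1]) / lv[i - 1] for i in range(1, len(lv))]

def candidate_scales(scales, lv):
    roc = rate_of_change(lv)
    # roc[i] corresponds to scales[i + 1]; keep interior local maxima
    return [scales[i + 1] for i in range(1, len(roc) - 1)
            if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
```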


Author(s):  
Rudolph Joshua Candare ◽  
Michelle Japitana ◽  
James Earl Cubillas ◽  
Cherry Bryan Ramirez

This research describes the methods involved in mapping different high-value crops in Agusan del Norte, Philippines, using LiDAR. The project is part of the Phil-LiDAR 2 Program, which aims to conduct a nationwide resource assessment using LiDAR. Because of the high resolution data involved, the methodology described here utilizes object-based image analysis and optimal features from LiDAR data and orthophotos. Object-based classification was primarily done by developing rule-sets in eCognition, using several features derived from the LiDAR data and orthophotos. Generally, classes of objects cannot be separated by simple thresholds on different features, which makes it difficult to develop a rule-set. To resolve this problem, the image objects were subjected to Support Vector Machine (SVM) learning. SVMs have gained popularity because of their ability to generalize well given a limited number of training samples. However, SVMs also suffer from parameter assignment issues that can significantly affect the classification results. More specifically, the regularization parameter C of the linear SVM has to be optimized through cross-validation to increase the overall accuracy. After performing the segmentation in eCognition, the optimization procedure as well as the extraction of the equations of the hyperplanes was done in Matlab. The learned hyperplanes separating one class from another in the multi-dimensional feature space can be thought of as super-features, which were then used in developing the classifier rule-set in eCognition. In this study, we report an overall classification accuracy of greater than 90% in different areas.
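A minimal sketch of the C-optimization and hyperplane-extraction step, shown here in Python with scikit-learn rather than the Matlab used in the study; the object feature matrix and labels are hypothetical inputs.

```python
# Grid-search the regularization parameter C of a linear SVM by 5-fold
# cross-validation, then extract the hyperplane weights and biases that can
# be rewritten as "super-feature" rules (dot(w[k], x) + b[k] > 0 for class k).
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

def learn_hyperplanes(features, labels):
    grid = GridSearchCV(LinearSVC(max_iter=10000),
                        {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
    grid.fit(features, labels)
    svm = grid.best_estimator_
    # one-vs-rest: one weight vector and one intercept per class
    return svm.coef_, svm.intercept_, grid.best_params_["C"]
```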


Author(s):  
Jati Pratomo ◽  
Monika Kuffer ◽  
Javier Martinez ◽  
Divyani Kohli

Object-Based Image Analysis (OBIA) has been successfully used to map slums. In general, the occurrence of uncertainties in producing geographic data is inevitable. However, most studies have concentrated solely on assessing classification accuracy, neglecting the inherent uncertainties. Our research analyses the impact of uncertainties in measuring the accuracy of OBIA-based slum detection. We selected Jakarta as our case study area because of a national policy of slum eradication, which is causing rapid changes in slum areas. Our research comprises four parts: slum conceptualization, ruleset development, implementation, and accuracy and uncertainty measurements. Existential and extensional uncertainties arise when producing reference data. The comparison of manual expert delineations of slums with the OBIA slum classification results in four combinations: True Positive, False Positive, True Negative, and False Negative. However, the higher the True Positive (which leads to better accuracy), the lower the certainty of the results. This demonstrates the impact of extensional uncertainties. Our study also demonstrates the role of non-observable indicators (i.e., land tenure) in assisting slum detection, particularly in areas where uncertainties exist. In conclusion, uncertainties increase when aiming to achieve a higher classification accuracy by matching manual delineation and OBIA classification.
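As a simple illustration of the accuracy comparison described above, the sketch below cross-tabulates a manual expert delineation against an OBIA slum map pixel-wise; both inputs are hypothetical boolean rasters (True = slum).

```python
# Pixel-wise True Positive / False Positive / True Negative / False Negative
# counts from two boolean masks of equal shape.
import numpy as np

def confusion_counts(reference, classified):
    tp = int(np.sum(reference & classified))      # slum in both
    fp = int(np.sum(~reference & classified))     # mapped slum, reference non-slum
    tn = int(np.sum(~reference & ~classified))    # non-slum in both
    fn = int(np.sum(reference & ~classified))     # missed slum
    overall_accuracy = (tp + tn) / reference.size
    return tp, fp, tn, fn, overall_accuracy
```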

