Detection of Potential Vernal Pools on the Canadian Shield (Ontario) Using Object-Based Image Analysis in Combination with Machine Learning

Author(s): Nick Luymes ◽ Patricia Chow-Fraser


2019 ◽ Vol 11 (10) ◽ pp. 1181
Author(s): Norman Kerle ◽ Markus Gerke ◽ Sébastien Lefèvre

The 6th biennial conference on object-based image analysis—GEOBIA 2016—took place in September 2016 at the University of Twente in Enschede, The Netherlands (see www [...]


2019 ◽ Vol 11 (5) ◽ pp. 503
Author(s): Sachit Rajbhandari ◽ Jagannath Aryal ◽ Jon Osborn ◽ Arko Lucieer ◽ Robert Musk

Ontology-driven Geographic Object-Based Image Analysis (O-GEOBIA) contributes to the identification of meaningful objects. When data from multiple sensors are fused, the number of feature variables increases and object identification becomes a challenging task. We propose a methodological contribution that extends feature variable characterisation. The method is illustrated with a case study in forest-type mapping in Tasmania, Australia. Satellite images, airborne LiDAR (Light Detection and Ranging) and expert photo-interpretation data are fused for feature extraction and classification. Two machine learning algorithms, Random Forest and Boruta, are used to identify important and relevant feature variables. A variogram is used to describe textural and spatial features, and different variogram features are used as input for rule-based classifications. The rule-based classifications employed (i) spectral features, (ii) vegetation indices, (iii) LiDAR, and (iv) variogram features, and resulted in overall classification accuracies of 77.06%, 78.90%, 73.39% and 77.06%, respectively. Following data fusion, the use of combined feature variables resulted in a higher classification accuracy (81.65%). Using the relevant features identified by the Boruta algorithm, the classification accuracy was further improved (82.57%). The results demonstrate that using relevant variogram features together with spectral and LiDAR features improves classification accuracy.
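
As a rough illustration of the feature-selection step described in this abstract, the sketch below pairs scikit-learn's Random Forest with the BorutaPy implementation of the Boruta algorithm; the feature matrix, labels and all parameters are placeholders rather than the study's fused Tasmanian data.

    # A minimal, hypothetical sketch of Boruta feature selection followed by a
    # Random Forest classification (an assumed workflow, not the authors' exact
    # pipeline). Requires scikit-learn and the `boruta` package.
    import numpy as np
    from boruta import BorutaPy
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Placeholder data: one row per image object, columns for fused feature
    # variables (spectral bands, vegetation indices, LiDAR metrics, variogram
    # statistics); labels stand in for expert-interpreted forest types.
    rng = np.random.default_rng(0)
    X = rng.random((500, 40))
    y = rng.integers(0, 4, 500)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Boruta flags all-relevant features by comparing each one against shadow features.
    selector = BorutaPy(
        RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=0),
        n_estimators="auto", random_state=0,
    )
    selector.fit(X_train, y_train)

    # With real data some features are confirmed; with this random placeholder none
    # may be, so fall back to all features to keep the sketch runnable.
    mask = selector.support_ if selector.support_.any() else np.ones(X.shape[1], dtype=bool)

    # Retrain a Random Forest on the relevant subset only.
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train[:, mask], y_train)
    print("Accuracy on relevant features:", rf.score(X_test[:, mask], y_test))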


2021 ◽ Vol 13 (5) ◽ pp. 937
Author(s): Payam Najafi ◽ Bakhtiar Feizizadeh ◽ Hossein Navid

Conservation tillage methods leave crop residue cover (CRC) on the soil surface, protecting it from water and wind erosion. The percentage of CRC on the soil surface is therefore critical for evaluating tillage intensity. The objective of this study was to develop a new methodology based on semiautomated fuzzy object-based image analysis (fuzzy OBIA) and to compare its efficiency with two machine learning algorithms, support vector machine (SVM) and artificial neural network (ANN), for evaluating the previous CRC and tillage intensity. We considered spectral images from two remote sensing platforms: an unmanned aerial vehicle (UAV) and the Sentinel-2 satellite. The results indicated that fuzzy OBIA applied to the multispectral Sentinel-2 image with a Gaussian membership function, with an overall accuracy of 0.920 and a Cohen’s kappa of 0.874, surpassed the machine learning algorithms and produced useful results for the classification of tillage intensity. The overall accuracy and Cohen’s kappa for the classification of RGB images from the UAV using the fuzzy OBIA method were 0.860 and 0.779, respectively. The semiautomated fuzzy OBIA clearly outperformed the machine learning approaches in estimating CRC and classifying tillage methods, and it has the potential to substitute for or complement field techniques.
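
The Gaussian membership function mentioned above can be sketched as follows; the class names, centres, widths and index values are invented for illustration only and are not the study's calibrated fuzzy rules.

    # Minimal sketch of a Gaussian-membership fuzzy classification of image objects,
    # assuming per-object index values have already been extracted from segmented
    # imagery. All class parameters below are hypothetical.
    import numpy as np

    def gaussian_membership(x, centre, width):
        # Degree of membership of feature value x in a class modelled by a Gaussian
        # membership function with the given centre and width.
        return np.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

    # Hypothetical tillage-intensity classes described by one residue-sensitive index.
    classes = {
        "intensive_tillage":    {"centre": 0.10, "width": 0.05},
        "reduced_tillage":      {"centre": 0.25, "width": 0.05},
        "conservation_tillage": {"centre": 0.45, "width": 0.08},
    }

    object_index = np.array([0.08, 0.27, 0.50, 0.33])  # per-object feature values

    # Evaluate every class membership for every object, then pick the highest.
    memberships = np.stack([
        gaussian_membership(object_index, p["centre"], p["width"])
        for p in classes.values()
    ])
    labels = np.array(list(classes.keys()))[memberships.argmax(axis=0)]
    print(labels)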


2014 ◽ Vol 84 ◽ pp. 107-119
Author(s): Markus Diesing ◽ Sophie L. Green ◽ David Stephens ◽ R. Murray Lark ◽ Heather A. Stewart ◽ ...

Author(s): M. Debella-Gilo ◽ B. T. Borchsenius ◽ K. Bjørkelo ◽ J. Breidenbach

Abstract. Planning the sustainable use of land resources and environmental monitoring benefit from accurate and detailed forest information. The basis of accurate forest information is data on the spatial extent of forests. In Norway, land resource maps have been carefully created through field visits and aerial image interpretation for over four decades, with periodic updating. However, because agricultural and built-up areas have been prioritized and the requirements for map accuracy are high, forest areas and outfields have not been updated frequently. Consequently, in some parts of the country the map has not been updated since its first creation in the 1960s. The Sentinel-2 satellite acquires images with high spatial and temporal resolution, which provides opportunities for creating cloud-free mosaics over areas that are often covered with clouds. Here, we combine object-based image analysis with machine learning methods in an automated framework to map forest area in Sentinel-2 mosaic images. The images are segmented using the eCognition™ software. Training data are collected automatically from the existing land resource map and filtered using height and greenness information so that the training samples reliably represent their respective classes. Two machine learning algorithms, namely Random Forest (RF) and the Multilayer Perceptron Neural Network (MLP), are then trained and validated before mapping forest area. The effects of including and excluding selected features on the classification accuracy are investigated. The results show that the method produces a forest cover map with very high accuracy (up to 97%). The MLP performs better than the RF algorithm, both in classification accuracy and in robustness to the inclusion and exclusion of features.
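
A simplified sketch of the RF-versus-MLP comparison with rule-based filtering of training samples might look like the following; the features, labels and thresholds are placeholders, not the Norwegian land resource data or the authors' code.

    # Illustrative sketch: filter training samples with height and greenness rules,
    # then train a Random Forest and a Multilayer Perceptron on per-segment features
    # and compare their accuracies.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    features = rng.random((2000, 12))   # per-segment spectral and terrain features
    labels = rng.integers(0, 2, 2000)   # 1 = forest, 0 = non-forest (placeholder)
    height = rng.random(2000) * 30      # canopy-height proxy in metres
    greenness = rng.random(2000)        # greenness proxy (e.g., NDVI)

    # Keep only training samples that plausibly represent their class:
    # forest segments must be both tall and green (thresholds are assumptions).
    keep = (labels == 0) | ((labels == 1) & (height > 5) & (greenness > 0.6))
    X_train, X_test, y_train, y_test = train_test_split(
        features[keep], labels[keep], random_state=1)

    for name, model in [
        ("RF", RandomForestClassifier(n_estimators=300, random_state=1)),
        ("MLP", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)),
    ]:
        model.fit(X_train, y_train)
        print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))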


2021 ◽ Vol 193 (2)
Author(s): Jens Oldeland ◽ Rasmus Revermann ◽ Jona Luther-Mosebach ◽ Tillmann Buttschardt ◽ Jan R. K. Lehmann

Abstract. Plant species that negatively affect their environment by encroachment require constant management and monitoring through field surveys. Drones have been suggested to support field surveyors, allowing more accurate mapping with just-in-time aerial imagery. Furthermore, object-based image analysis tools could increase the accuracy of species maps. However, only a few studies compare species distribution maps resulting from traditional field surveys with those from object-based image analysis of drone imagery. We acquired drone imagery for a saltmarsh area (18 ha) on the Hallig Nordstrandischmoor (Germany) with patches of Elymus athericus, a tall grass which encroaches on the higher parts of saltmarshes. A field survey was conducted afterwards using the drone orthoimagery as a baseline. We used object-based image analysis (OBIA) to segment the CIR imagery into polygons, which were classified into eight land cover classes. Finally, we compared the polygons of the field-based and OBIA-based maps visually and for location, area, and overlap before and after post-processing. The OBIA-based classification yielded good results (kappa = 0.937) and agreed in general with the field-based map (field = 6.29 ha, drone = 6.22 ha with E. athericus dominance). Post-processing revealed 0.31 ha of misclassified polygons, often related to water runnels or shadows, leaving 5.91 ha of E. athericus cover. The overlap of the two polygon maps was only 70%, resulting from many small patches identified where E. athericus was absent. In sum, drones can greatly support field surveys in the monitoring of plant species by providing accurate species maps and just-in-time, very-high-resolution imagery.
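
The area and overlap comparison between the two polygon maps could be reproduced along these lines with geopandas; the file names are hypothetical and the layers are assumed to share a projected coordinate reference system.

    # Hedged sketch of the map-comparison step: total mapped area and the overlap
    # between a field-based polygon map and an OBIA-based polygon map of
    # Elymus athericus cover. Paths are hypothetical; a projected CRS is assumed
    # so that areas come out in square metres.
    import geopandas as gpd

    field = gpd.read_file("field_survey_elymus.gpkg")  # hypothetical path
    obia = gpd.read_file("obia_elymus.gpkg")           # hypothetical path
    obia = obia.to_crs(field.crs)                      # align coordinate systems

    field_ha = field.geometry.area.sum() / 10_000
    obia_ha = obia.geometry.area.sum() / 10_000

    # The intersection of the two maps is the area where both agree on E. athericus.
    overlap = gpd.overlay(field, obia, how="intersection")
    overlap_ha = overlap.geometry.area.sum() / 10_000

    print(f"Field: {field_ha:.2f} ha, OBIA: {obia_ha:.2f} ha, "
          f"overlap: {overlap_ha:.2f} ha ({100 * overlap_ha / field_ha:.0f}% of the field map)")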

