Extracting Fractional Vegetation Cover from Digital Photographs: A Comparison of In Situ, SamplePoint, and Image Classification Methods

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7310
Author(s):  
Xiaolei Yu ◽  
Xulin Guo

Fractional vegetation cover is a key indicator of rangeland health. However, survey techniques such as line-point intercept transects, pin frame quadrats, and visual cover estimates can be time-consuming and are prone to subjective variation. For this reason, most studies focus only on overall vegetation cover, ignoring variation in the live and dead fractions. In the arid regions of the Canadian prairies, grass cover is typically a mixture of green and senescent plant material, and it is essential to monitor both the green and senescent fractional cover. In this study, we designed and built a camera stand to acquire close-range photographs of rangeland vegetation. Photographs were processed with four approaches: SamplePoint software, object-based image analysis (OBIA), and unsupervised and supervised classification to estimate the fractional cover of green vegetation, senescent vegetation, and background substrate. These estimates were compared to in situ surveys. Our results showed that the SamplePoint software is an effective alternative to field measurements, whereas the unsupervised classification lacked accuracy and consistency. Object-based image analysis performed better than the other image classification methods. Overall, SamplePoint and OBIA produced mean values equivalent to those from in situ assessment. These findings suggest an unbiased, consistent, and expedient alternative to in situ estimation of grassland fractional vegetation cover, one that also provides a permanent image record.
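
For readers who want to experiment with the image-classification route described above, the following Python sketch shows a minimal unsupervised approach: k-means clustering of a nadir photograph into three classes, followed by fractional-cover computation. This is not the authors' workflow; the file name, cluster count, and class assignment are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact workflow): unsupervised k-means
# clustering of a nadir rangeland photograph into three classes, followed
# by fractional-cover computation. File name and cluster count are
# illustrative assumptions.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("quadrat_photo.jpg").convert("RGB"), dtype=float) / 255.0

# Cluster pixels in RGB space into 3 groups (green, senescent, substrate).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(img.reshape(-1, 3))

# Fractional cover = share of pixels assigned to each cluster.
fractions = np.bincount(labels, minlength=3) / labels.size
for k, f in enumerate(fractions):
    print(f"cluster {k}: {f:.1%} of image")

# The analyst must still inspect the cluster means to decide which cluster
# corresponds to green vegetation, senescent vegetation, or background.
```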

2019 ◽  
Vol 11 (23) ◽  
pp. 2825 ◽  
Author(s):  
Claire Fisk ◽  
Kenneth Clarke ◽  
Megan Lewis

The collection of high-quality field measurements of ground cover is critical for the calibration and validation of fractional ground cover maps derived from satellite imagery. Field-based hyperspectral ground cover sampling is a potential alternative to traditional in situ techniques. This study aimed to develop an effective sampling design for spectral ground cover surveys to estimate fractional ground cover in the Australian arid zone. To meet this aim, we addressed two key objectives: (1) determining how spectral surveys and traditional step-point sampling compare when conducted at the same spatial scale and (2) comparing these two methods with current Australian satellite-derived fractional cover products. Across seven arid, sparsely vegetated survey sites, six 500-m transects were established. Ground cover reflectance was recorded by taking continuous hyperspectral readings along each transect, and step-point surveys were conducted along the same transects. Both measures of ground cover were converted into proportions of photosynthetic vegetation, non-photosynthetic vegetation, and bare soil for each site. These proportions were compared between the two in situ methods and against the MODIS and Landsat fractional cover products. We found strong correlations between fractional cover derived from hyperspectral and step-point sampling conducted at the same spatial scale at our survey sites. Comparison of the in situ measurements and image-derived fractional cover products showed that, overall, the Landsat product was strongly related to both in situ methods for non-photosynthetic vegetation and bare soil, whereas the MODIS product was strongly correlated with both in situ methods for photosynthetic vegetation. This study demonstrates the potential of the spectral transect method, both in its ability to produce results comparable to traditional transect measures and in its improved objectivity and relative logistical ease. Future efforts should include spectral ground cover sampling as part of Australia’s plan to produce calibration and validation datasets for remotely sensed products.
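
Converting hyperspectral transect readings into cover fractions is typically done by spectral unmixing. The sketch below illustrates one common variant, linear unmixing with non-negative least squares; the endmember spectra and measured spectrum are synthetic placeholders, and the paper does not state that this exact method was used.

```python
# Minimal sketch (assumed workflow, not the authors' code): linear spectral
# unmixing of a hyperspectral ground-cover reading into photosynthetic
# vegetation (PV), non-photosynthetic vegetation (NPV), and bare soil.
# Endmember spectra and the measured spectrum are hypothetical placeholders.
import numpy as np
from scipy.optimize import nnls

n_bands = 200
rng = np.random.default_rng(0)

# Columns = endmember reflectance spectra for PV, NPV, bare soil (placeholders).
endmembers = np.abs(rng.normal(0.3, 0.1, size=(n_bands, 3)))

# One measured reflectance spectrum along the transect (placeholder mixture).
measured = endmembers @ np.array([0.2, 0.5, 0.3]) + rng.normal(0, 0.01, n_bands)

# Non-negative least squares gives abundances >= 0; renormalise to sum to 1.
abundances, _ = nnls(endmembers, measured)
fractions = abundances / abundances.sum()
print(dict(zip(["PV", "NPV", "bare soil"], np.round(fractions, 3))))
```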


2022 ◽  
Vol 14 (2) ◽  
pp. 380
Author(s):  
Birgitta Putzenlechner ◽  
Philip Marzahn ◽  
Philipp Koal ◽  
Arturo Sánchez-Azofeifa

The fraction of absorbed photosynthetically active radiation (FAPAR) is an essential climate variable (ECV) for assessing the productivity of ecosystems. Satellite remote sensing provides spatially distributed FAPAR products, but their accurate and efficient validation is challenging in forest environments. Because FAPAR is linked to canopy structure, it may be approximated by the fractional vegetation cover (FCOVER) under the assumption that incoming radiation is either absorbed by the canopy or passes through gaps in it. With FCOVER being easier to retrieve, FAPAR validation activities could benefit from a priori information on FCOVER. Spatially distributed FCOVER is available from satellite remote sensing or can be retrieved from Unmanned Aerial Vehicle (UAV) imagery at centimetric resolution. We investigated remote sensing-derived FCOVER as a proxy for in situ FAPAR in a dense mixed-coniferous forest, considering both absolute values and spatiotemporal variability. To this end, direct FAPAR measurements acquired with a wireless sensor network were related to FCOVER derived from UAV and Sentinel-2 (S2) imagery across different seasons. The results indicated that spatially aggregated UAV-derived FCOVER was close (RMSE = 0.02) to in situ FAPAR during the peak vegetation period, when the canopy was almost closed. The S2 FCOVER product underestimated both the in situ FAPAR and the UAV-derived FCOVER (RMSE > 0.3), which we attributed to the generic nature of the retrieval algorithm and the coarser resolution of the product. We concluded that UAV-derived FCOVER may be used as a proxy for direct FAPAR measurements in dense canopies. As another key finding, the spatial variability of FCOVER consistently surpassed that of the in situ FAPAR, a pattern also well reflected in the S2 FAPAR and FCOVER products. We recommend integrating this experimental finding as a consistency criterion in ECV quality assessments. To facilitate FAPAR sampling activities, we further suggest assessing the spatial variability of UAV-derived FCOVER to benchmark sample sizes for in situ FAPAR measurements. Finally, our study contributes to refining the FAPAR sampling protocols needed for the validation and improvement of FAPAR estimates in forest environments.
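
As a rough illustration of the FCOVER-as-proxy idea, the Python sketch below computes FCOVER as the canopy-pixel fraction of a binary UAV mask per plot and compares it to in situ FAPAR with an RMSE, mirroring the comparison metric reported above. All arrays are synthetic placeholders, not data from the study.

```python
# Minimal sketch (illustrative, not the study's processing chain): compute
# FCOVER from a binary UAV canopy mask aggregated to plot level, then compare
# it against in situ FAPAR with RMSE. All values below are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Binary canopy mask per plot (True = canopy, False = gap), 100 x 100 px plots.
plots = rng.random((5, 100, 100)) < 0.92   # dense canopy, roughly 0.92 cover

# FCOVER per plot = fraction of canopy pixels.
fcover = plots.mean(axis=(1, 2))

# In situ FAPAR per plot (placeholder values standing in for sensor readings).
fapar = np.array([0.91, 0.93, 0.90, 0.94, 0.92])

rmse = np.sqrt(np.mean((fcover - fapar) ** 2))
print(f"FCOVER per plot: {np.round(fcover, 3)}  RMSE vs FAPAR: {rmse:.3f}")
```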


2011 ◽  
Vol 52 ◽  
Author(s):  
Lijana Stabingienė ◽  
Giedrius Stabingis ◽  
Kęstutis Dučinskas

In image classification, situations often arise in which images are corrupted to some degree by additive noise. Such noise can be modeled by Gaussian random fields (GRF). Both supervised and unsupervised methods are used in image classification. In this paper, we compare our proposed supervised classification method based on plug-in Bayes discriminant functions (PBDF) (see [6] and [11]) with an unsupervised classification method based on the grey level co-occurrence matrix (GLCM) (see, e.g., [8] and [1]). A remotely sensed image (USGS Earth Explorer) is used for classification. GRFs with different spatial correlation ranges are also generated and added to the original remotely sensed image; such a situation can occur naturally during a forest fire, when smoke covers part of the territory. These images are used to examine classification accuracy.
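
To make the noise model concrete, the sketch below adds spatially correlated Gaussian noise to an image by smoothing white noise with a Gaussian kernel, one simple way to approximate a GRF with a varying correlation range. It is not the authors' exact GRF specification, and the image is a random placeholder.

```python
# Minimal sketch (an approximation, not the authors' exact GRF model):
# spatially correlated Gaussian noise generated by smoothing white noise
# with a Gaussian kernel, then added to an image to mimic smoke-like
# corruption. The correlation range is controlled by the filter sigma.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
image = rng.random((256, 256))            # placeholder for the remotely sensed image

def correlated_noise(shape, corr_range, std, rng):
    """White Gaussian noise smoothed to the requested correlation range."""
    white = rng.normal(0.0, 1.0, shape)
    smooth = gaussian_filter(white, sigma=corr_range)
    return std * smooth / smooth.std()    # rescale to the target standard deviation

for corr_range in (2, 8, 32):             # different spatial correlation ranges
    noisy = image + correlated_noise(image.shape, corr_range, std=0.1, rng=rng)
    print(f"range={corr_range}: noisy image std={noisy.std():.3f}")
```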


2017 ◽  
Vol 11 (3) ◽  
pp. 036004 ◽  
Author(s):  
Eduarda Martiniano de Oliveira Silveira ◽  
Michele Duarte de Menezes ◽  
Fausto Weimar Acerbi Júnior ◽  
Marcela Castro Nunes Santos Terra ◽  
José Márcio de Mello

2021 ◽  
Vol 13 (23) ◽  
pp. 4896
Author(s):  
Kambiz Borna ◽  
Antoni B. Moore ◽  
Azadeh Noori Hoshyar ◽  
Pascal Sirguey

Unsupervised image classification methods conventionally use the spatial information of pixels to reduce the effect of speckle noise in the classified map. To extract this spatial information, they employ a predefined geometry, i.e., a fixed-size window or a segmentation map. However, this coding of geometry lacks the complexity needed to accurately reflect the spatial connectivity within objects in a scene. Additionally, there is no unique mathematical formula to determine the shape and scale applied to the geometry; these parameters are usually estimated by expert users. In this paper, a novel geometry-led approach using Vector Agents (VAs) is proposed to address the above drawbacks in unsupervised classification algorithms. The proposed method has two primary steps: (1) creating reliable training samples and (2) constructing the VA model. In the first step, the method uses the statistical information of an image classified by k-means to select a set of reliable training samples. In the second step, the VAs are trained and constructed to classify the image. The model is tested on three high-spatial-resolution images. The results show the enhanced capability of the VA model to reduce noise in images with complex features, e.g., streets and buildings.
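
Step (1) of this approach, selecting reliable training samples from a k-means result, could look roughly like the Python sketch below. The selection rule (keeping the pixels nearest each cluster centroid) and the 5% threshold are assumptions made for illustration; the paper's actual statistical criterion may differ, and step (2), the Vector Agent model itself, is not reproduced.

```python
# Minimal sketch of step (1) only, under an assumed selection rule: run
# k-means on the pixels, then keep as "reliable" training samples the pixels
# closest to their cluster centroid. The thresholds here are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.random((10000, 3))            # placeholder image pixels (RGB rows)

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
dist = np.linalg.norm(pixels - km.cluster_centers_[km.labels_], axis=1)

reliable_idx = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    # Keep the 5% of pixels nearest the centroid as training samples (assumed threshold).
    keep = members[np.argsort(dist[members])[: max(1, len(members) // 20)]]
    reliable_idx.append(keep)

reliable_idx = np.concatenate(reliable_idx)
print(f"selected {reliable_idx.size} reliable training pixels out of {pixels.shape[0]}")
```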

