Using long temporal reference units to assess the spatial accuracy of global satellite-derived burned area products

2022, Vol. 269, pp. 112823
Author(s): Magí Franquesa, Joshua Lizundia-Loiola, Stephen V. Stehman, Emilio Chuvieco
2013, Vol. 2013, pp. 1-13
Author(s): Laia Núñez-Casillas, José Rafael García Lázaro, José Andrés Moreno-Ruiz, Manuel Arbelo

The turn of the new millennium was accompanied by a particularly diverse group of burned area datasets from different sensors over the Canadian boreal forests, brought together in a year of low global fire activity. This paper provides an assessment of spatial and temporal accuracy by means of a fire-by-fire comparison of the following: two burned area datasets obtained from SPOT-VEGETATION (VGT) imagery, a MODIS Collection 5 burned area dataset, and three datasets obtained from NOAA-AVHRR. Results showed that the MODIS burned area data provided accurate dates of burn but a large omission error, partially caused by calibration problems. One of the VGT-derived datasets (L3JRC) captured the largest number of fire sites in spite of its substantial overall underestimation, whereas the GBA2000 dataset achieved the best burned area quantification; both showed delayed and highly variable fire timing. Spatial accuracy was comparable between the 5 km and the 1 km AVHRR-derived datasets but was markedly lower in the 8 km dataset, leading us to conclude that at higher spatial resolutions, temporal accuracy was lower. The probable methodological and contextual causes of these differences are analyzed in detail.
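To make the fire-by-fire comparison concrete, per-fire spatial and temporal agreement metrics of the kind described above can be sketched as follows. This is a minimal, hypothetical illustration (the co-registered boolean masks and day-of-burn layers are assumed inputs), not the validation protocol actually used in the study:

```python
import numpy as np

def spatial_agreement(reference_mask: np.ndarray, product_mask: np.ndarray) -> dict:
    """Per-fire spatial agreement between a product mask and a reference mask
    (both boolean and co-registered; the reference is assumed to contain burned pixels)."""
    ref = reference_mask.astype(bool)
    prod = product_mask.astype(bool)
    tp = int(np.sum(ref & prod))            # burned in both
    omission = int(np.sum(ref & ~prod))     # burned in reference, missed by product
    commission = int(np.sum(~ref & prod))   # mapped by product, unburned in reference
    return {
        "omission_error": omission / (tp + omission),
        "commission_error": commission / (tp + commission) if (tp + commission) else 0.0,
        "dice": 2 * tp / (2 * tp + omission + commission),
    }

def mean_date_offset(reference_doy: np.ndarray, product_doy: np.ndarray, detected: np.ndarray) -> float:
    """Mean difference in day-of-burn (product minus reference) over commonly detected pixels."""
    common = detected & np.isfinite(reference_doy) & np.isfinite(product_doy)
    return float(np.mean(product_doy[common] - reference_doy[common]))
```

Computed per fire site, these values support exactly the kind of omission/underestimation and date-of-burn comparisons summarized in the abstract.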


Author(s): Marc Ouellet, Julio Santiago, Ziv Israeli, Shai Gabay

Spanish and English speakers tend to conceptualize time as running from left to right along a mental line. Previous research suggests that this representational strategy arises from the participants' exposure to a left-to-right writing system. However, direct evidence supporting this assertion suffers from several limitations and relies only on the visual modality. This study subjected the reading hypothesis to a direct test using an auditory task. Participants from two groups (Spanish and Hebrew) differing in the directionality of their orthographic system had to discriminate the temporal reference (past or future) of verbs and adverbs presented auditorily to either the left or the right ear, by pressing a left or a right key. Spanish participants were faster responding to past words with the left hand and to future words with the right hand, whereas Hebrew participants showed the opposite pattern. Our results demonstrate that the left-right mapping of time is not restricted to the visual modality and that the direction of reading accounts for the preferred directionality of the mental time line. These results are discussed in the context of a possible mechanism underlying the effects of reading direction on highly abstract conceptual representations.
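The key measure in this design is a space-time congruency effect on response times. A minimal analysis sketch, assuming hypothetical trial-level data with made-up column names (not the authors' materials or analysis pipeline):

```python
import pandas as pd

# Hypothetical trial-level data: one row per correct trial, with columns
# 'group' ('Spanish'/'Hebrew'), 'response_side' ('left'/'right'),
# 'temporal_ref' ('past'/'future') and 'rt' in ms. All names are assumptions.
trials = pd.read_csv("trials.csv")

# A left-to-right mental time line predicts faster past-left and future-right
# responses; a right-to-left orthography predicts the reverse pattern.
trials["congruent_ltr"] = (
    ((trials["temporal_ref"] == "past") & (trials["response_side"] == "left"))
    | ((trials["temporal_ref"] == "future") & (trials["response_side"] == "right"))
)

mean_rt = trials.groupby(["group", "congruent_ltr"])["rt"].mean().unstack()
# Positive values indicate a left-to-right advantage, negative a right-to-left one.
print(mean_rt[False] - mean_rt[True])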


2016, Vol. 16 (3), pp. 643-661
Author(s): Kostas Kalabokidis, Alan Ager, Mark Finney, Nikos Athanasis, Palaiologos Palaiologou, ...

Abstract. We describe a Web-GIS wildfire prevention and management platform (AEGIS) developed as an integrated and easy-to-use decision support tool for managing wildland fire hazards in Greece (http://aegis.aegean.gr). The AEGIS platform assists with early fire warning, fire planning, fire control and coordination of firefighting forces by providing online access to information that is essential for wildfire management. The system uses a number of spatial and non-spatial data sources to support key system functionalities. Land use/land cover maps were produced by combining field inventory data with high-resolution multispectral satellite images (RapidEye). These data support wildfire simulation tools that allow users to examine potential fire behavior and hazard with the Minimum Travel Time fire spread algorithm. End-users provide a minimum number of inputs, such as fire duration, ignition point and weather information, to conduct a fire simulation. AEGIS offers three types of simulations, i.e., single-fire propagation, point-scale calculation of potential fire behavior, and burn probability analysis, similar to the FlamMap fire behavior modeling software. Artificial neural networks (ANNs) were utilized for wildfire ignition risk assessment based on various parameters, training methods, activation functions, pre-processing methods and network structures. The combination of ANNs and expected burned area maps is used to generate an integrated output map of fire hazard prediction. The system also incorporates weather information obtained from remote automatic weather stations and weather forecast maps. The system and associated computational algorithms leverage parallel processing techniques (i.e., high-performance computing and cloud computing) that provide the computational power required for real-time application. All AEGIS functionalities are accessible to authorized end-users through a web-based graphical user interface. An innovative smartphone application, AEGIS App, also provides mobile access to the web-based version of the system.
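As one illustration of the ignition-risk step, an ANN classifier of this kind could be trained roughly as sketched below. The feature names, input file, and network configuration are placeholders, not the AEGIS implementation; scikit-learn's MLPClassifier stands in for whatever ANN framework the system actually uses:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Hypothetical training table: one row per grid cell/day with weather, fuel and
# topography predictors and a binary 'ignition' label (all column names assumed).
data = pd.read_csv("ignition_training.csv")
X = data[["temperature", "rel_humidity", "wind_speed", "fuel_moisture", "slope", "elevation"]]
y = data["ignition"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Network structure, activation function and training method are exactly the kind
# of choices the study compares; the values below are illustrative placeholders.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                  solver="adam", max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The predicted ignition probabilities, mapped back to the grid, are the sort of layer that could be combined with expected burned area maps into an integrated fire hazard product.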


Land, 2021, Vol. 10 (7), pp. 679
Author(s): Avi Bar-Massada

The Wildland Urban Interface (WUI) is where human settlements border or intermingle with undeveloped land, often with multiple detrimental consequences. Therefore, mapping the WUI is required in order to identify areas at risk. There are two main WUI mapping methods, the point-based approach and the zonal approach. The two differ in data requirements and may produce considerably different maps, yet they have not been directly compared before. My objective was to systematically compare point-based and zonal WUI maps of California, and to test the efficacy of a new database of building locations in the context of WUI mapping. I assessed the spatial accuracy of the building database, and then compared the spatial patterns of the WUI maps by estimating the effect of multiple ancillary variables on the amount of agreement between maps. I found that the building database is highly accurate and is suitable for WUI mapping. The point-based approach estimated a consistently larger WUI area across California compared to the zonal approach. The spatial correspondence between maps was low to moderate, and was significantly affected by building numbers and by their spatial arrangement. The discrepancy between WUI maps suggests that they are not directly comparable within and across landscapes, and that each WUI map should serve a distinct practical purpose.
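Once both WUI maps are rasterised onto a common grid, their per-pixel agreement can be summarised with overlap and chance-corrected statistics. A minimal sketch, assuming two co-registered boolean rasters (not the exact workflow of the paper):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def wui_agreement(point_based: np.ndarray, zonal: np.ndarray) -> dict:
    """Per-pixel agreement between two co-registered boolean WUI rasters."""
    a = point_based.ravel().astype(bool)
    b = zonal.ravel().astype(bool)
    both = int(np.sum(a & b))
    either = int(np.sum(a | b))
    return {
        "area_point_based": int(a.sum()),              # WUI pixels in the point-based map
        "area_zonal": int(b.sum()),                    # WUI pixels in the zonal map
        "overlap_fraction": both / either if either else 0.0,  # Jaccard-style overlap
        "kappa": cohen_kappa_score(a, b),              # chance-corrected agreement
    }
```

Stratifying such statistics by building density or spatial arrangement is one way to probe the drivers of disagreement highlighted in the abstract.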


2021, Vol. 13 (8), pp. 1509
Author(s): Xikun Hu, Yifang Ban, Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to expound on the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in two of the three cases, those with compact burned scars, while the ML methods seem to be more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance, with higher kappa (around 0.9), at a heterogeneous Mediterranean fire site in Greece; Fast-SCNN performs better than the others, with kappa over 0.79, at a compact boreal forest fire of varying burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet dominates among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing in next-generation Earth observation satellites.
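Applying a trained segmentation network to a full post-fire scene is typically done tile by tile. A minimal inference sketch in PyTorch, where the trained model and the band stack are placeholders rather than the study's actual configuration:

```python
import numpy as np
import torch

def predict_burned_area(model: torch.nn.Module, image: np.ndarray, tile: int = 256) -> np.ndarray:
    """Tile a (bands, H, W) post-fire image, run the trained segmentation network on
    each tile, and mosaic the per-pixel burned/unburned labels back together.
    For simplicity, H and W are assumed to be multiples of the tile size (pad otherwise)."""
    model.eval()
    bands, height, width = image.shape
    burned = np.zeros((height, width), dtype=np.uint8)
    with torch.no_grad():
        for row in range(0, height, tile):
            for col in range(0, width, tile):
                patch = image[:, row:row + tile, col:col + tile]
                x = torch.from_numpy(patch).float().unsqueeze(0)  # (1, bands, h, w)
                logits = model(x)                                  # (1, 2, h, w) assumed
                pred = logits.argmax(dim=1).squeeze(0).numpy()
                burned[row:row + tile, col:col + tile] = pred.astype(np.uint8)
    return burned
```

Cross-sensor transfer, as tested in the paper, amounts to feeding the equivalent Landsat-8 band stack to the same trained model, which is why consistent band selection and scaling matter.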


2021, Vol. 260, pp. 112468
Author(s): Miguel A. Belenguer-Plomer, Mihai A. Tanase, Emilio Chuvieco, Francesca Bovolo

Forests, 2021, Vol. 12 (7), pp. 880
Author(s): Andrey Sirin, Alexander Maslov, Dmitry Makarov, Yakov Gulbe, Hans Joosten

Forest-peat fires are notable for the difficulty of estimating their carbon losses. Combined carbon losses from tree biomass and peat soil were estimated for an 8 ha forest-peat fire in the Moscow region after the catastrophic fires of 2010. The loss of tree biomass carbon was assessed by reconstructing the forest stand structure, using the classification of pre-fire high-resolution satellite imagery and an after-fire ground survey of the same forest classes in adjacent areas. Soil carbon loss was assessed by using the root collars of stumps to reconstruct the pre-fire soil surface and interpolating the peat characteristics of adjacent non-burned areas. The mean (median) depth of peat loss across the burned area was 15 ± 8 (14) cm, varying from 13 ± 5 (11) to 20 ± 9 (19) cm. Loss of soil carbon was 9.22 ± 3.75 to 11.0 ± 4.96 kg m−2 (mean) and 8.0–11.0 kg m−2 (median); values exceeding 100 tC ha−1 have also been found in other studies. The estimated soil carbon loss for the entire burned area, 98 (mean) and 92 (median) tC ha−1, significantly exceeds the carbon loss from live (tree) biomass, which averaged 58.8 tC ha−1. The loss of carbon in the forest-peat fire thus equals the release of nearly 400 (soil) and, including the biomass, almost 650 tCO2 ha−1 into the atmosphere, which illustrates the underestimated impact of boreal forest-peat fires on atmospheric gas concentrations and climate.
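The unit conversions behind these figures can be checked with a short back-of-envelope calculation, using the reported mean values and the standard 44/12 CO2-to-carbon molar-mass ratio (the script is illustrative, not part of the study):

```python
# Back-of-envelope check of the reported unit conversions (input values from the abstract).
soil_loss_kg_m2 = 9.8                      # ~9.8 kg C m-2, mid-range of the reported means
soil_loss_tC_ha = soil_loss_kg_m2 * 10     # 1 kg m-2 = 10 t ha-1, giving ~98 tC/ha
biomass_loss_tC_ha = 58.8                  # reported mean loss from live tree biomass

CO2_PER_C = 44.0 / 12.0                    # molar-mass ratio of CO2 to carbon

print(soil_loss_tC_ha * CO2_PER_C)         # ~359 tCO2/ha from soil alone ("nearly 400")
print(biomass_loss_tC_ha * CO2_PER_C)      # ~216 tCO2/ha from tree biomass
```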


Sensors, 2021, Vol. 21 (7), pp. 2407
Author(s): Hojun You, Dongsu Kim

Fluvial remote sensing has been used to monitor diverse riverine properties, through processes such as river bathymetry and the visual detection of suspended sediment, algal blooms, and bed materials, more efficiently than laborious and expensive in-situ measurements. Red–green–blue (RGB) optical sensors have been widely used in traditional fluvial remote sensing. However, owing to their three confined bands, they rely on visual inspection for qualitative assessments and are of limited use for quantitative and accurate monitoring. Recent advances in hyperspectral imaging in the fluvial domain have enabled hyperspectral images with more than 150 spectral bands. Thus, various riverine properties can be quantitatively characterized using sensors on low-altitude unmanned aerial vehicles (UAVs) with a high spatial resolution. Many efforts are ongoing to take full advantage of hyperspectral band information in fluvial research. Although geo-referenced hyperspectral images can be acquired from satellites and manned airplanes, few attempts have been made using UAVs. This is mainly because synthesizing line-scanned images on top of image registration is more difficult for UAVs, owing to the large, motion-sensitive images produced by the dense spatial resolution. Therefore, in this study, we propose a practical technique for achieving high spatial accuracy in UAV-based fluvial hyperspectral imaging through efficient image registration using an optical flow algorithm. Template matching algorithms are the most common image registration technique in RGB-based remote sensing; however, they require many calculations and can be error-prone depending on the user, as decisions regarding various parameters are required. Furthermore, the spatial accuracy of this technique needs to be verified, as it has not been widely applied to hyperspectral imagery. The proposed technique reduced spatial errors by 91.9% on average compared to the case where no image registration was applied, and by 78.7% compared to template matching.
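The core registration step, dense optical flow between a reference frame and a target frame followed by warping, can be sketched with OpenCV's Farnebäck implementation. The parameter values and the 8-bit grayscale inputs below are illustrative assumptions, not the settings tuned in the study:

```python
import cv2
import numpy as np

def register_with_optical_flow(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Estimate dense optical flow from the reference to the target (both 8-bit
    grayscale, same size) and warp the target back onto the reference geometry."""
    flow = cv2.calcOpticalFlowFarneback(
        reference, target, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0,
    )
    h, w = reference.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sampling the target at (x + u, y + v) pulls it into alignment with the reference.
    return cv2.remap(target, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Unlike template matching, this flow-based approach needs no user-selected templates and yields a per-pixel displacement field, which is what makes it attractive for line-scanned hyperspectral frames.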

