Uni-Temporal Multispectral Imagery for Burned Area Mapping with Deep Learning

2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to demonstrate the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, together with machine learning (ML) algorithms, were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in two of the three cases, i.e., those with compact burn scars, while ML methods seem more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit comparable performance with higher kappa (around 0.9) in a heterogeneous Mediterranean fire site in Greece; Fast-SCNN outperforms the others with kappa over 0.79 in a compact boreal forest fire with varying burn severity in Sweden. Furthermore, when the trained models are directly transferred to corresponding Landsat-8 data, HRNet dominates among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing on next-generation Earth observation satellites.
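As an illustration of the uni-temporal setup described above, the sketch below (not the authors' code; the band choice, patch size, and toy architecture are assumptions) shows how fire-sensitive bands from a single post-fire Sentinel-2 scene could be fed to a small encoder-decoder CNN that predicts a per-pixel burned/unburned mask:

```python
# Minimal sketch of uni-temporal burned area segmentation (illustrative only).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy stand-in for the U-Net/HRNet-style models compared in the paper."""
    def __init__(self, in_bands: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # downsample x2
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),     # upsample x2
            nn.Conv2d(32, 1, 1),                                    # burned logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Hypothetical input: red, NIR, SWIR1, SWIR2 reflectances of one post-fire
# scene, tiled into 256x256 patches and scaled to [0, 1].
patch = torch.rand(1, 4, 256, 256)
model = TinySegNet(in_bands=4)
burned_prob = torch.sigmoid(model(patch))      # per-pixel burn probability
burned_mask = burned_prob > 0.5                # binary burned area map
```

The same stacked-band input could in principle be built from the corresponding Landsat-8 bands, which is the cross-sensor transfer the study evaluates.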

2020 ◽  
Vol 12 (15) ◽  
pp. 2422
Author(s):  
Lisa Knopp ◽  
Marc Wieland ◽  
Michaela Rättich ◽  
Sandro Martinis

Wildfires have major ecological, social and economic consequences. Information about the extent of burned areas is essential to assess these consequences and can be derived from remote sensing data. Over the last years, several methods have been developed to segment burned areas with satellite imagery. However, these methods mostly require extensive preprocessing, while deep learning techniques—which have successfully been applied to other segmentation tasks—have yet to be fully explored. In this work, we combine sensor-specific and methodological developments from the past few years and suggest an automatic processing chain, based on deep learning, for burned area segmentation using mono-temporal Sentinel-2 imagery. In particular, we created a new training and validation dataset, which is used to train a convolutional neural network based on a U-Net architecture. We performed several tests on the input data and reached optimal network performance using the spectral bands of the visual, near infrared and shortwave infrared domains. The final segmentation model achieved an overall accuracy of 0.98 and a kappa coefficient of 0.94.
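For context on the reported scores, here is a minimal sketch (not the authors' evaluation code; array names are placeholders) of how overall accuracy and the kappa coefficient can be computed for a binary burned/unburned mask against reference labels:

```python
# Overall accuracy and Cohen's kappa from a binary confusion matrix (sketch).
import numpy as np

def accuracy_and_kappa(pred: np.ndarray, ref: np.ndarray):
    """pred, ref: boolean arrays of the same shape (True = burned)."""
    tp = np.sum(pred & ref)
    tn = np.sum(~pred & ~ref)
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    n = tp + tn + fp + fn
    oa = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# Hypothetical masks; in practice these would come from the U-Net prediction
# and the manually labelled validation tiles.
pred = np.random.rand(512, 512) > 0.5
ref = np.random.rand(512, 512) > 0.5
oa, kappa = accuracy_and_kappa(pred, ref)
```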


2021 ◽  
Author(s):  
Kim-Anh Nguyen ◽  
Yuei-An Liou ◽  
Le-Thu Ho

Bushfires are among the most dangerous natural and man-made hazards. They can cause great damage to air quality, human health, the environment and biodiversity. In addition, forest fires may be a potential and significant source of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans. In early 2020, Australia experienced serious bushfires, with an estimated 18.6 million hectares burned, over 5,900 buildings (including 2,779 homes) destroyed, and at least 34 people (including three firefighters) and an estimated billion animals, among them some endangered species, killed. Subsequently, air quality was degraded to hazardous levels. NASA estimated that about 360 million tonnes of CO2 had been emitted as of 2 January 2020. Remote sensing data have been instrumental for environmental monitoring, in particular of bushfires. Many methods and algorithms have been proposed to detect burned areas in forests. However, it is challenging or even infeasible for non-experts to apply them routinely due to the chain of sophisticated processing steps involved. Here, we present a simple and effective method for mapping a burned area. The performance of different optical sensors and indices is compared. Sentinel-2 MSI and Landsat-8 data are utilized to compare burned forest by analyzing different indices, including NDVI, NDBR and the newly developed Normalized Difference Latent Heat Index (NDLI). Forest damage is estimated over Katoomba, Australia, and the burn severity map is generated and classified into eight levels (none, high regrowth, low regrowth, unburned, low severity, moderate-low severity, moderate-high severity, and high severity). A comparison of the results from Sentinel-2 MSI and Landsat-8 imagery is performed and presented.
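The eight-level severity map described above follows the usual NBR/dNBR workflow; a minimal sketch is given below, assuming Sentinel-2 band roles (B8A as NIR, B12 as SWIR2) and commonly cited USGS-style dNBR class boundaries rather than this study's exact thresholds:

```python
# NBR / dNBR burn-severity classification (illustrative thresholds).
import numpy as np

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio, e.g. Sentinel-2 (B8A - B12) / (B8A + B12)."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def severity_class(dnbr_scaled: np.ndarray) -> np.ndarray:
    """Assign one of eight severity levels from dNBR scaled by 1000."""
    edges = [-500, -250, -100, 100, 270, 440, 660]        # assumed class edges
    labels = np.array(["none", "high regrowth", "low regrowth", "unburned",
                       "low severity", "moderate-low severity",
                       "moderate-high severity", "high severity"])
    return labels[np.digitize(dnbr_scaled, edges)]

# Hypothetical pre- and post-fire reflectances (placeholders for real scenes).
pre_nir, pre_swir = np.full((100, 100), 0.45), np.full((100, 100), 0.15)
post_nir, post_swir = np.full((100, 100), 0.20), np.full((100, 100), 0.35)
dnbr = (nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)) * 1000
severity = severity_class(dnbr)   # 2-D array of severity labels
```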


2021 ◽  
Vol 13 (5) ◽  
pp. 992
Author(s):  
Dan López-Puigdollers ◽  
Gonzalo Mateo-García ◽  
Luis Gómez-Chova

The systematic monitoring of the Earth using optical satellites is limited by the presence of clouds. Accurately detecting these clouds is necessary to exploit satellite image archives in remote sensing applications. Despite many developments, cloud detection remains an unsolved problem with room for improvement, especially over bright surfaces and thin clouds. Recently, advances in cloud masking using deep learning have shown significant boosts in cloud detection accuracy. However, these works are validated in heterogeneous manners, and the comparison with operational threshold-based schemes is not consistent among many of them. In this work, we systematically compare deep learning models trained on Landsat-8 images on different Landsat-8 and Sentinel-2 publicly available datasets. Overall, we show that deep learning models exhibit a high detection accuracy when trained and tested on independent images from the same Landsat-8 dataset (intra-dataset validation), outperforming operational algorithms. However, the performance of deep learning models is similar to operational threshold-based ones when they are tested on different datasets of Landsat-8 images (inter-dataset validation) or datasets from a different sensor with similar radiometric characteristics such as Sentinel-2 (cross-sensor validation). The results suggest that (i) the development of cloud detection methods for new satellites can be based on deep learning models trained on data from similar sensors and (ii) there is a strong dependence of deep learning models on the dataset used for training and testing, which highlights the necessity of standardized datasets and procedures for benchmarking cloud detection models in the future.
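To illustrate the cross-sensor setting, the sketch below uses a commonly cited Landsat-8-to-Sentinel-2 band correspondence (an assumption based on the public sensor specifications, not taken from the paper) to stack the matching bands in the fixed order a Landsat-8-trained cloud model would expect:

```python
# Sketch of feeding a Landsat-8-trained cloud model with Sentinel-2 bands.
import numpy as np

# Approximate radiometric counterparts: Landsat-8 OLI band -> Sentinel-2 MSI band.
L8_TO_S2 = {
    "coastal": ("B1", "B01"),
    "blue":    ("B2", "B02"),
    "green":   ("B3", "B03"),
    "red":     ("B4", "B04"),
    "nir":     ("B5", "B8A"),
    "swir1":   ("B6", "B11"),
    "swir2":   ("B7", "B12"),
}

def stack_for_model(scene: dict, sensor: str) -> np.ndarray:
    """scene maps band names (e.g. 'B8A') to 2-D reflectance arrays;
    sensor is 'L8' or 'S2'. Returns a (bands, H, W) array in a fixed order."""
    col = 0 if sensor == "L8" else 1
    return np.stack([scene[L8_TO_S2[name][col]] for name in L8_TO_S2], axis=0)

# Hypothetical Sentinel-2 scene resampled to a common grid.
s2_scene = {b: np.random.rand(256, 256) for _, b in L8_TO_S2.values()}
x = stack_for_model(s2_scene, sensor="S2")     # input for the L8-trained model
```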


2021 ◽  
Vol 13 (11) ◽  
pp. 2220
Author(s):  
Yanbing Bai ◽  
Wenqi Wu ◽  
Zhengxin Yang ◽  
Jinze Yu ◽  
Bo Zhao ◽  
...  

Efficiently distinguishing permanent water from temporary water in flood disasters has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, but estimating the water type in flood events from post-flood imagery alone remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms to improve flood detection, yet this field has so far been explored only preliminarily due to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach, leveraging the large-scale, publicly available Sen1Flood11 dataset consisting of roughly 4831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We use focal loss to address the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirm the effectiveness of the proposed designs. In comparison experiments, the proposed method is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Flood11 test set. On the Sen1Flood11 Bolivia test set, it also achieves an mIoU of 47.88%, IoU of 76.74%, and OA of 95.59%, showing good generalization ability.
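A minimal sketch of the focal loss mentioned above for the water/non-water imbalance, assuming binary segmentation logits in PyTorch; the gamma and alpha values are common defaults, not necessarily those used by the authors:

```python
# Binary focal loss for imbalanced water/non-water segmentation (sketch).
import torch
import torch.nn.functional as F

def binary_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                      gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), averaged over pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)                           # p if target==1 else 1 - p
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Hypothetical batch: segmentation logits and a heavily imbalanced water mask.
logits = torch.randn(2, 1, 64, 64)
water_mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
loss = binary_focal_loss(logits, water_mask)
```

The down-weighting factor (1 - p_t)^gamma reduces the contribution of easy, abundant non-water pixels so that the sparse water class drives the gradient, which is the stated purpose of using focal loss here.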


2021 ◽  
Vol 54 (1) ◽  
pp. 182-208
Author(s):  
Sani M. Isa ◽  
Suharjito ◽  
Gede Putera Kusuma ◽  
Tjeng Wawan Cenggoro

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3982
Author(s):  
Giacomo Lazzeri ◽  
William Frodella ◽  
Guglielmo Rossi ◽  
Sandro Moretti

Wildfires have affected global forests and the Mediterranean area with increasing recurrence and intensity in recent years, as climate change results in reduced precipitation and higher temperatures. To assess the impact of wildfires on the environment, burned area mapping has become progressively more relevant. It was initially carried out via field sketches; the advent of satellite remote sensing opened new possibilities, reducing the cost, uncertainty, and safety issues of the previous techniques. In the present study an experimental methodology was adopted to test the potential of advanced remote sensing techniques such as multispectral Sentinel-2, PRISMA hyperspectral satellite, and UAV (unmanned aerial vehicle) remotely sensed data for the multitemporal mapping of burned areas via soil–vegetation recovery analysis in two test sites in Portugal and Italy. In case study one, an innovative multiplatform data classification was performed by correlating Sentinel-2 RBR (relativized burn ratio) fire severity classes with the scene hyperspectral signature through a pixel-by-pixel comparison, leading to a converging classification. In the adopted methodology, RBR burned area analysis and vegetation recovery were tested for accordance with biophysical vegetation parameters (LAI, fCover, and fAPAR). In case study two, a UAV-sensed NDVI index was adopted for high-resolution mapping data collection. At a large scale, the Sentinel-2 RBR index proved to be efficient for burned area analysis, from both the fire severity and vegetation recovery perspectives. Despite the time elapsed between the event and the acquisition, the PRISMA hyperspectral converging classification based on Sentinel-2 was able to detect and discriminate different spectral signatures corresponding to different fire severity classes. At the slope scale, the UAV platform proved to be an effective tool for mapping and characterizing the burned area, giving a clear advantage with respect to field GPS mapping. The results highlighted that UAV platforms, if equipped with a hyperspectral sensor and used in a synergistic approach with PRISMA, would constitute a useful tool for classifying satellite-acquired scenes, allowing for the acquisition of ground truth.
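For reference, the relativized burn ratio (RBR) used in case study one is typically computed from pre- and post-fire NBR as in the sketch below (dNBR divided by pre-fire NBR plus a small offset); the band names and inputs are illustrative assumptions, not the authors' exact processing chain:

```python
# Sketch of NBR, dNBR, RBR and NDVI from multispectral reflectance bands.
import numpy as np

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio, e.g. Sentinel-2 (B8A - B12) / (B8A + B12)."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def rbr(nbr_pre: np.ndarray, nbr_post: np.ndarray) -> np.ndarray:
    """dNBR relativized by pre-fire NBR; the 1.001 offset avoids division by zero."""
    dnbr = nbr_pre - nbr_post
    return dnbr / (nbr_pre + 1.001)

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI, as used for the UAV-based mapping in case study two."""
    return (nir - red) / (nir + red + 1e-6)

# Hypothetical pre/post-fire reflectance arrays resampled to a common grid.
pre_nir, pre_swir = np.full((64, 64), 0.45), np.full((64, 64), 0.15)
post_nir, post_swir = np.full((64, 64), 0.20), np.full((64, 64), 0.35)
severity = rbr(nbr(pre_nir, pre_swir), nbr(post_nir, post_swir))
```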


2018 ◽  
Vol 10 (10) ◽  
pp. 1572 ◽  
Author(s):  
Chunping Qiu ◽  
Michael Schmitt ◽  
Lichao Mou ◽  
Pedram Ghamisi ◽  
Xiao Zhu

Global Local Climate Zone (LCZ) maps, indicating urban structures and land use, are crucial for Urban Heat Island (UHI) studies and also serve as starting points to better understand the spatio-temporal dynamics of cities worldwide. However, reliable LCZ maps are not available on a global scale, hindering scientific progress across a range of disciplines that study the functionality of sustainable cities. As a first step towards large-scale LCZ mapping, this paper aims to provide guidance on data and feature choice. To this end, we evaluate the spectral reflectance and spectral indices of the globally available Sentinel-2 and Landsat-8 imagery, as well as the Global Urban Footprint (GUF) dataset, the OpenStreetMap (OSM) buildings and land use layers, and the Visible Infrared Imaging Radiometer Suite (VIIRS)-based Nighttime Light (NTL) data, regarding their relevance for discriminating different Local Climate Zones (LCZs). Using a Residual convolutional neural Network (ResNet), a systematic analysis of feature importance is performed with a manually labeled dataset containing nine cities located in Europe. Based on this investigation of data and feature choice, we propose a framework to fully exploit the available datasets. The results show that GUF, OSM, and NTL can contribute to the classification accuracy of some LCZs with relatively few samples. For large-scale LCZ mapping, it is suggested that Landsat-8 and Sentinel-2 spectral reflectances be used jointly, for example in a majority-voting manner, as demonstrated by the improvement achieved by the proposed framework.
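A small sketch of the suggested majority-voting fusion, assuming per-pixel LCZ label maps predicted separately from each input source (the variable names and the 17-class LCZ scheme used here are illustrative assumptions):

```python
# Per-pixel majority voting over LCZ predictions from different sources (sketch).
from typing import List
import numpy as np

def majority_vote(label_maps: List[np.ndarray], n_classes: int = 17) -> np.ndarray:
    """label_maps: 2-D arrays of LCZ class ids (0..n_classes-1), one per source."""
    stack = np.stack(label_maps, axis=0)                        # (sources, H, W)
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                                # fused (H, W) map

# Hypothetical per-sensor classification results on a common grid.
lcz_from_l8 = np.random.randint(0, 17, (128, 128))
lcz_from_s2a = np.random.randint(0, 17, (128, 128))
lcz_from_s2b = np.random.randint(0, 17, (128, 128))
fused_lcz = majority_vote([lcz_from_l8, lcz_from_s2a, lcz_from_s2b])
```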


PLoS ONE ◽  
2020 ◽  
Vol 15 (5) ◽  
pp. e0232962 ◽  
Author(s):  
Fiona Ngadze ◽  
Kudzai Shaun Mpakairi ◽  
Blessing Kavhu ◽  
Henry Ndaimani ◽  
Monalisa Shingirayi Maremba
