Towards a Deep-Learning-Based Framework of Sentinel-2 Imagery for Automated Active Fire Detection

2021 ◽  
Vol 13 (23) ◽  
pp. 4790
Author(s):  
Qi Zhang ◽  
Linlin Ge ◽  
Ruiheng Zhang ◽  
Graciela Isabel Metternicht ◽  
Chang Liu ◽  
...  

This paper proposes an automated active fire detection framework using Sentinel-2 imagery. The framework consists of three modules: data collection and preprocessing, deep-learning-based active fire detection, and final product generation. The active fire detection module is built on a specifically designed dual-domain channel-position attention (DCPA)+HRNetV2 model; for training, a dataset of semi-manually annotated active fire samples was constructed over wildfires that occurred on the east coast of Australia and the west coast of the United States in 2019–2020. This dataset can serve as a benchmark for other deep-learning-based algorithms aiming to improve active fire detection accuracy. Performance is evaluated in terms of both the detection accuracy of the deep-learning-based models and the processing efficiency of the whole framework. Results indicate that the DCPA+HRNetV2 combination surpasses the DeepLabV3 and HRNetV2 models for active fire detection. In addition, the automated framework can deliver active fire detection results for Sentinel-2 inputs covering about 12,000 km2 (including data download) in less than 6 min, with average intersection over union (IoU) values of 70.4% and 71.9% achieved in tests over Australia and the United States, respectively. The concepts in this framework can be extended to other remote sensing sensors that acquire data in the SWIR-NIR-Red range. They can serve as a powerful tool for handling the large volumes of high-resolution data expected in future fire monitoring systems and as a cost-efficient resource for governments and fire service agencies that need timely, optimized firefighting plans.
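As a rough illustration of the evaluation metric reported above, the sketch below computes the intersection over union (IoU) between a predicted and a reference binary fire mask; the masks and the 0.98 threshold are placeholders, not the authors' data or model.

```python
import numpy as np

def fire_iou(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Intersection over union of two binary active-fire masks
    (1 = fire pixel, 0 = background)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, ref).sum() / union

# Placeholder masks standing in for a DCPA+HRNetV2 prediction and a
# semi-manually annotated reference scene.
pred = np.random.rand(512, 512) > 0.98
ref = np.random.rand(512, 512) > 0.98
print(f"IoU = {fire_iou(pred, ref):.3f}")
```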

2021 ◽  
Vol 9 ◽  
Author(s):  
Joshua J. Levy ◽  
Rebecca M. Lebeaux ◽  
Anne G. Hoen ◽  
Brock C. Christensen ◽  
Louis J. Vaickus ◽  
...  

What is the relationship between mortality and satellite images as elucidated through the use of Convolutional Neural Networks? Background: Following a century of increase, life expectancy in the United States has stagnated and begun to decline in recent decades. Using satellite images and street view images, prior work has demonstrated associations of the built environment with income, education, access to care, and health factors such as obesity. However, assessment of learned image feature relationships with variation in crude mortality rate across the United States has been lacking. Objective: We sought to investigate if county-level mortality rates in the U.S. could be predicted from satellite images. Methods: Satellite images of neighborhoods surrounding schools were extracted with the Google Static Maps application programming interface for 430 counties representing ~68.9% of the US population. A convolutional neural network was trained using crude mortality rates for each county in 2015 to predict mortality. Learned image features were interpreted using Shapley Additive Feature Explanations, clustered, and compared to mortality and its associated covariate predictors. Results: Predicted mortality from satellite images in a held-out test set of counties was strongly correlated to the true crude mortality rate (Pearson r = 0.72). Direct prediction of mortality using a deep learning model across a cross-section of 430 U.S. counties identified key features in the environment (e.g., sidewalks, driveways, and hiking trails) associated with lower mortality. Learned image features were clustered, and we identified 10 clusters that were associated with education, income, geographical region, race, and age. Conclusions: The application of deep learning techniques to remotely-sensed features of the built environment can serve as a useful predictor of mortality in the United States. Although we identified features that were largely associated with demographic information, future modeling approaches that directly identify image features associated with health-related outcomes have the potential to inform targeted public health interventions.
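A minimal sketch, not the authors' implementation, of the modeling step described above: a small convolutional network regresses a county-level rate from image tiles, and held-out predictions are scored with the Pearson correlation. The network size, tile shape, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn
from scipy.stats import pearsonr

class TinyCNNRegressor(nn.Module):
    """Small CNN that maps an RGB tile to a single scalar rate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = TinyCNNRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch: 8 RGB tiles with county-level mortality targets.
images = torch.rand(8, 3, 256, 256)
targets = torch.rand(8)

model.train()
for _ in range(5):                    # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()

# Score predictions on a placeholder "held-out" batch with Pearson r,
# the metric reported in the abstract.
model.eval()
with torch.no_grad():
    test_images = torch.rand(8, 3, 256, 256)
    test_targets = torch.rand(8)
    preds = model(test_images).numpy()
r, _ = pearsonr(preds, test_targets.numpy())
print(f"Pearson r = {r:.2f}")
```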


2021 ◽  
Author(s):  
Shuren Chou

Deep learning has a strong capacity for hierarchical feature learning from unlabeled remote sensing images. In this study, the simple linear iterative clustering (SLIC) method was improved to segment the image into good-quality superpixels. A convolutional neural network (CNN) was then used to extract water bodies from Sentinel-2 MSI data. In the proposed framework, the improved SLIC method obtains accurate water body boundaries by optimizing the initial clustering centers, designing a dynamic distance measure, and expanding the search space. Unlike traditional water body extraction methods, it can also achieve multi-level water body detection. Experimental results showed that this method had higher detection accuracy and robustness than other methods. The study demonstrates that water bodies can be extracted from remotely sensed images with deep learning and that the results can be assessed for accuracy.
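As a sketch of the general superpixel-then-classify workflow, the snippet below uses scikit-image's standard SLIC (not the authors' improved variant) and a toy per-superpixel rule in place of the CNN classifier; the image, band choice, and threshold are placeholders.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

# Placeholder three-band composite standing in for a Sentinel-2 subset.
image = np.random.rand(512, 512, 3).astype(np.float32)

# Segment into superpixels with standard SLIC.
segments = slic(image, n_segments=400, compactness=10, start_label=1)

# Toy per-superpixel rule standing in for the CNN classifier described
# in the abstract: flag segments whose mean value in one band is high.
water_mask = np.zeros(segments.shape, dtype=bool)
for region in regionprops(segments, intensity_image=image[..., 2]):
    if region.mean_intensity > 0.6:          # hypothetical threshold
        water_mask[segments == region.label] = True

print(f"{water_mask.mean():.1%} of pixels flagged as water")
```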


2019 ◽  
Vol 11 (17) ◽  
pp. 2000 ◽  
Author(s):  
Liming He ◽  
Georgy Mostovoy

High-resolution data with nearly global coverage from the Sentinel-2 mission open a new era for crop growth monitoring and yield estimation from remote sensing. The objective of this study is to demonstrate the potential of using Sentinel-2 biophysical data combined with an ecosystem modeling approach to estimate cotton yield in the southern United States (US). The Boreal Ecosystems Productivity Simulator (BEPS) ecosystem model was used to simulate cotton gross primary production (GPP) over three Sentinel-2 tiles located in Mississippi, Georgia, and Texas in 2017. Leaf area index (LAI) derived from Sentinel-2 measurements and hourly meteorological data from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2) reanalysis were used to drive the ecosystem model. The simulated GPP values at 20-m grid spacing were aggregated to the county level (17 counties in total) and compared to county-level cotton lint yield estimates from the National Agricultural Statistics Service of the United States Department of Agriculture. The results of the comparison show that the BEPS-simulated cotton GPP explains 85% of the variation in cotton yield. Our study suggests that integrating Sentinel-2 LAI time series into the ecosystem model yields reliable estimates of cotton yield.
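A minimal sketch, with synthetic numbers rather than the study's data, of the final comparison step: simulated 20-m GPP is averaged per county and county-level yield is regressed on it to obtain an R² of the kind reported above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_counties = 17

# Pretend each county has a stack of 20 m GPP pixels from an ecosystem
# model run (values and distributions are arbitrary placeholders).
county_gpp = [rng.gamma(shape=5.0, scale=100.0, size=rng.integers(500, 2000))
              for _ in range(n_counties)]
mean_gpp = np.array([g.mean() for g in county_gpp]).reshape(-1, 1)

# Synthetic county yields loosely proportional to GPP plus noise.
yield_per_county = 1.2 * mean_gpp.ravel() + rng.normal(0, 60, n_counties)

model = LinearRegression().fit(mean_gpp, yield_per_county)
r2 = model.score(mean_gpp, yield_per_county)
print(f"R^2 of county yield vs. county-mean GPP: {r2:.2f}")
```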


2020 ◽  
Author(s):  
Eileen Rintsch ◽  
Jessica L. McCarty

Crop residue and rangeland burning is a common practice in the United States, but verified ground-based estimates of the frequency of these fires are sparse. We present a comparison between known fire locations collected during the summer 2019 NOAA/NASA FIREX-AQ field campaign and several satellite-based active fire detections to estimate the occurrence of small-scale fires in agroecosystems. Many emissions inventories at the state-, country-, and global-level are driven by active fire detections rather than burned area estimates for small fires in agroecosystems. The study area is focused on the southern Great Plains and Mississippi Delta of the United States. We combined fire occurrence data from the 375 m Visible Infrared Imaging Radiometer Suite (VIIRS), the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS), and the 2 km Geostationary Operational Environmental Satellite (GOES) active fire products with 30 m land use data from the U.S. Department of Agriculture Cropland Data Layer (CDL). The detections were compared to fires and land use validated in the field during the NOAA/NASA FIREX-AQ mission. GOES detected these fires at a higher frequency than MODIS or VIIRS: MODIS detected 873 active fires and VIIRS detected 2,859, while GOES detected 13,634. Additionally, a large proportion of the fires documented in the field, approximately 41%, were not detected by any satellite instrument used in the study. If GOES detections are excluded, only approximately 5% of the documented fires were detected. This suggests that a large amount of cropland and rangeland burning is not detected by current active fire products from polar-orbiting satellites such as MODIS and VIIRS, with implications for regional air pollution monitoring, emissions inventories, and climate impacts of open burning.
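As a sketch of how field-documented fires might be matched against satellite active fire detections, the snippet below flags each field fire that has a detection within a hypothetical 1 km radius; the coordinates and the matching rule are illustrative assumptions, not the campaign's protocol.

```python
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Placeholder field-documented fires and satellite detections.
field = pd.DataFrame({"lat": [34.20, 33.90], "lon": [-90.50, -90.10]})
detections = pd.DataFrame({"lat": [34.21, 36.00], "lon": [-90.49, -95.00]})

radius_km = 1.0                              # hypothetical match radius
matched = []
for _, fire in field.iterrows():
    d = haversine_km(fire.lat, fire.lon, detections.lat, detections.lon)
    matched.append(bool((d <= radius_km).any()))

print(f"{np.mean(matched):.0%} of field fires matched by a detection")
```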


2020 ◽  
Author(s):  
Joshua J. Levy ◽  
Rebecca M. Lebeaux ◽  
Anne G. Hoen ◽  
Brock C. Christensen ◽  
Louis J. Vaickus ◽  
...  

Abstract: What is the relationship between mortality and satellite images as elucidated through the use of Convolutional Neural Networks? Background: Following a century of increase, life expectancy in the United States has stagnated and begun to decline in recent decades. Using satellite images and street view images, prior work has demonstrated associations of the built environment with income, education, access to care and health factors such as obesity. However, assessment of learned image feature relationships with variation in crude mortality rate across the United States has been lacking. We sought to investigate if county-level mortality rates in the U.S. could be predicted from satellite images. Methods: Satellite images were extracted with the Google Static Maps application programming interface for 430 counties representing approximately 68.9% of the US population. A convolutional neural network was trained using crude mortality rates for each county in 2015 to predict mortality. Learned image features were interpreted using Shapley Additive Feature Explanations, clustered, and compared to mortality and its associated covariate predictors. Results: Predicted mortality from satellite images in a held-out test set of counties was strongly correlated to the true crude mortality rate (Pearson r = 0.72). Learned image features were clustered, and we identified 10 clusters that were associated with education, income, geographical region, race and age. Direct prediction of mortality using a deep learning model across a cross-section of 430 U.S. counties identified key features in the environment (e.g. sidewalks, driveways and hiking trails) associated with lower mortality. Conclusions: The application of deep learning techniques to remotely-sensed features of the built environment can serve as a useful predictor of mortality in the United States. Although we identified features that were largely associated with demographic information, future modeling approaches that directly identify image features associated with health-related outcomes have the potential to inform targeted public health interventions.
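A minimal sketch, with synthetic embeddings rather than the study's learned features, of the clustering step described above: per-county image feature vectors are grouped into 10 clusters and cluster means of county covariates are inspected. The embedding size and covariate values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Placeholder learned feature vector for each of the 430 counties.
features = rng.normal(size=(430, 64))
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)

# Compare clusters against placeholder county covariates.
covariates = pd.DataFrame({
    "cluster": clusters,
    "median_income": rng.normal(55_000, 12_000, 430),
    "median_age": rng.normal(39, 4, 430),
})
print(covariates.groupby("cluster").mean().round(1))
```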


Author(s):  
Domenico Antonio Giuseppe Dell'Aglio ◽  
Carmine Gambardella ◽  
Massimiliano Gargiulo ◽  
Antonio Iodice ◽  
Rosaria Parente ◽  
...  

Forest fires are among the natural disasters that have always affected regions of the world typically characterized by a tropical climate with long periods of drought. However, due to climate change, in recent years other regions of our planet that had never experienced this phenomenon before have also been affected. One of them is the Italian peninsula, and especially the regions of southern Italy. For this reason, the scientific community, including the remote sensing community, is strongly engaged in developing reliable techniques to provide useful support to the competent authorities. In particular, three specific tasks have been carried out in this work: (i) fire risk prevention, (ii) active fire detection, and (iii) post-fire area assessment. To accomplish these analyses, the capability of a set of spectral indices derived from spaceborne remote sensing (RS) data to monitor forest fires is assessed. The spectral indices are obtained from Sentinel-2 multispectral images of the European Space Agency (ESA), which are free of charge and openly accessible. Moreover, the twin Sentinel-2 satellites make it possible to overcome some restrictions on delivery time and observation repeat time. The performance of the proposed analyses was assessed experimentally on the forest fires that occurred in two specific study areas during the summer of 2017: the volcano Vesuvius, near Naples, and the Lattari mountains, near Sorrento (both in Campania, Italy).
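As a sketch of the kind of spectral index used for post-fire assessment, the snippet below computes the Normalized Burn Ratio (NBR) from NIR and SWIR reflectance (typically Sentinel-2 bands B8A and B12) and a pre/post-fire difference (dNBR); the reflectance values and the 0.27 burned/unburned threshold are illustrative assumptions.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-6)

# Placeholder reflectance arrays for pre- and post-fire acquisitions.
pre_nir, pre_swir = np.full((100, 100), 0.35), np.full((100, 100), 0.15)
post_nir, post_swir = np.full((100, 100), 0.18), np.full((100, 100), 0.30)

# dNBR = pre-fire NBR minus post-fire NBR; larger values suggest burning.
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burned = dnbr > 0.27          # hypothetical moderate-severity threshold
print(f"burned fraction: {burned.mean():.1%}")
```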


2019 ◽  
Author(s):  
S. B. Choi ◽  
J. Kim ◽  
I. Ahn

Abstract: To identify countries that have seasonal patterns similar to the time series of influenza surveillance data in the United States and other countries, and to forecast the 2018–2019 seasonal influenza outbreak in the U.S. using linear regression, autoregressive integrated moving average, and deep learning. We collected the surveillance data of 164 countries from 2010 to 2018 using the FluNet database. Data for influenza-like illness (ILI) in the U.S. were collected from the FluView database. This cross-correlation study identified the time lag between the two time series. Deep learning was performed to forecast ILI, total influenza, and influenza A and B viruses 26 weeks ahead in the U.S. The seasonal influenza patterns in Australia and Chile showed a high correlation with those of the U.S. 22 weeks and 28 weeks earlier, respectively. Despite the difficulty of forecasting 26 weeks ahead, the R2 score of the DNN models for ILI on the 2015–2019 validation set was 0.722. Our prediction models forecast that ILI for the U.S. in 2018–2019 may peak later and be less severe than in 2017–2018, judging from influenza activity in Australia and Chile in 2018. This approach makes it possible to estimate peak timing, peak intensity, and type-specific influenza activity for the next season at the 40th week. The correlation of seasonal influenza among Australia, Chile, and the U.S. could be used to decide on an influenza vaccine strategy six months ahead in the U.S.
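A minimal sketch, with synthetic weekly series rather than FluNet data, of the cross-correlation step described above: it finds the lag at which one country's influenza activity best correlates with another's. The series, the simulated 22-week lead, and the maximum lag searched are illustrative assumptions.

```python
import numpy as np

weeks = np.arange(8 * 52)                      # eight seasons of weekly data
us = np.sin(2 * np.pi * weeks / 52) + 0.1 * np.random.randn(weeks.size)
aus = np.roll(us, -22)                         # pretend Australia leads the U.S. by 22 weeks

def best_lag(leading, target, max_lag=40):
    """Lag (in weeks) that maximizes the Pearson correlation between
    the leading series and the lag-shifted target series."""
    lags = range(0, max_lag + 1)
    corrs = [np.corrcoef(leading[:-lag or None], target[lag:])[0, 1]
             for lag in lags]
    return int(np.argmax(corrs)), max(corrs)

lag, corr = best_lag(aus, us)
print(f"best lag = {lag} weeks, r = {corr:.2f}")
```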


Subject: Prospects for artificial intelligence applications. Significance: Artificial intelligence (AI) technologies, particularly those using 'deep learning', have in the past five years helped to automate many tasks previously outside the capabilities of computers. There are signs that the feverish pace of recent progress is slowing. Impacts: Western legislation will make companies responsible for preventing decisions based on biased AI. Advances in 'explainable AI' will be rapid. China will be a major research player in AI technologies, alongside the United States, Japan and Europe.

