Generation and evaluation of an ensemble of wildland fire simulations

2020, Vol 29 (2), pp. 160
Author(s): Frédéric Allaire, Jean-Baptiste Filippi, Vivien Mallet

Numerical simulations of wildfire spread can provide support in deciding firefighting actions, but their predictive performance is challenged by the uncertainty of model inputs stemming from weather forecasts, fuel parameterisation and other fire characteristics. In this study, we assign probability distributions to the inputs and propagate the uncertainty by running hundreds of Monte Carlo simulations. The ensemble of simulations is summarised via a burn probability map, whose evaluation against the corresponding observed burned surface is not straightforward. We define several properties and introduce probabilistic scores that are common in meteorological applications. Based on these elements, we evaluate the predictive performance of our ensembles for seven fires that occurred in Corsica from mid-2017 to early 2018. We obtain fair performance in some of the cases, but the accuracy and reliability of the forecasts can be improved. The ensemble generation can be accomplished in a reasonable amount of time and could be used in an operational context provided that sufficient computational resources are available. The proposed probabilistic scores are also appropriate in a calibration process to improve the ensembles.
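
The burn probability map and its probabilistic evaluation can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the authors' code): it aggregates an ensemble of simulated burned-area masks into a burn probability map and scores it against the observed burned surface with the Brier score, one of the standard probabilistic scores used in meteorology. The grid size, ensemble size and random fields are placeholders.

```python
import numpy as np

# Hypothetical ensemble of K Monte Carlo fire-spread simulations on a raster grid:
# burned[k, i, j] = True if simulation k burned cell (i, j).
rng = np.random.default_rng(0)
K, ny, nx = 200, 50, 50
burned = rng.random((K, ny, nx)) < 0.3      # stand-in for real simulation output
observed = rng.random((ny, nx)) < 0.3       # stand-in for the observed burned surface

# Burn probability map: fraction of ensemble members that burned each cell.
burn_prob = burned.mean(axis=0)

# Brier score against the observed burned surface (0 = perfect).
brier = np.mean((burn_prob - observed.astype(float)) ** 2)
print(f"Brier score: {brier:.3f}")
```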

Atmosphere, 2021, Vol 12 (5), pp. 552
Author(s): Bu-Yo Kim, Joo Wan Cha, Ki-Ho Chang, Chulkyu Lee

In this study, visibility in South Korea was predicted (VISRF) using a random forest (RF) model based on ground observation data from the Automated Synoptic Observing System (ASOS) and air pollutant data from the European Centre for Medium-Range Weather Forecasts (ECMWF) Copernicus Atmosphere Monitoring Service (CAMS) model. Visibility was predicted and evaluated using a training set for the period 2017–2018 and a test set for 2019. The VISRF results were compared with visibility data from the ASOS (VISASOS) and from the Unified Model (UM) Local Data Assimilation and Prediction System (LDAPS) (VISLDAPS) operated by the Korea Meteorological Administration (KMA). The bias, root mean square error (RMSE), and correlation coefficient (R) were 3.67 km, 6.12 km, and 0.36, respectively, for VISASOS versus VISLDAPS, compared to 0.14 km, 2.84 km, and 0.81, respectively, for VISASOS versus VISRF. Based on these comparisons, the applied RF model offers significantly better predictive performance and more accurate visibility estimates (VISRF) than the currently available VISLDAPS outputs. This modeling approach can be implemented by authorities to estimate visibility accurately and thereby reduce accidents, risks to public health, and economic losses, as well as inform urban development policies and environmental regulations.
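
As a rough illustration of the workflow described above, the sketch below trains a random forest on tabular predictors and reports bias, RMSE and the correlation coefficient on a held-out test set. The synthetic predictors and target are placeholders, not the ASOS/CAMS inputs used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Placeholder predictors (e.g., humidity, temperature, aerosol load, ...) and visibility target.
X_train, X_test = rng.random((5000, 8)), rng.random((1000, 8))
y_train = 20 * X_train[:, 0] + rng.normal(0, 2, 5000)   # synthetic "visibility" in km
y_test = 20 * X_test[:, 0] + rng.normal(0, 2, 1000)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

bias = np.mean(pred - y_test)                    # mean error (km)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))    # root mean square error (km)
r = np.corrcoef(pred, y_test)[0, 1]              # correlation coefficient
print(f"bias={bias:.2f} km  RMSE={rmse:.2f} km  R={r:.2f}")
```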


2016, Vol 46 (2), pp. 234-248
Author(s): Erin J. Belval, Yu Wei, Michael Bevers

Wildfire behavior is a complex and stochastic phenomenon that can present unique tactical management challenges. This paper investigates a multistage stochastic mixed integer program with full recourse to model spatially explicit fire behavior and to select suppression locations for a wildland fire. Simplified suppression decisions take the form of “suppression nodes”, which are placed on a raster landscape for multiple decision stages. Weather scenarios are used to represent a distribution of probable changes in fire behavior in response to random weather changes, modeled using probabilistic weather trees. Multistage suppression decisions and fire behavior respond to these weather events and to each other. Nonanticipativity constraints ensure that suppression decisions account for uncertainty in weather forecasts. Test cases for this model provide examples of fire behavior interacting with suppression to achieve a minimum expected area impacted by fire and suppression.
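
The key modelling ingredient, nonanticipativity, can be shown in a toy two-stage example. The sketch below (using the PuLP library) is not the paper's raster fire-spread model; it only illustrates how stage-1 suppression decisions are forced to be identical across weather scenarios while stage-2 decisions may differ. The node losses, budget and scenario probabilities are invented for illustration.

```python
import pulp

# Toy two-stage example: candidate suppression nodes and two weather scenarios.
nodes = range(4)
scenarios = {"hot_dry": 0.6, "cooler": 0.4}               # hypothetical probabilities
loss = {"hot_dry": [5, 8, 3, 6], "cooler": [2, 1, 4, 3]}  # area lost if node j is never suppressed
budget_per_stage = 2

prob = pulp.LpProblem("toy_suppression", pulp.LpMinimize)
# x[s][t][j] = 1 if node j receives suppression in stage t under scenario s.
x = {s: [[pulp.LpVariable(f"x_{s}_{t}_{j}", cat="Binary") for j in nodes]
         for t in range(2)] for s in scenarios}

# Minimise the expected area impacted by fire.
prob += pulp.lpSum(p * pulp.lpSum(loss[s][j] * (1 - x[s][0][j] - x[s][1][j]) for j in nodes)
                   for s, p in scenarios.items())

for s in scenarios:
    for t in range(2):
        prob += pulp.lpSum(x[s][t][j] for j in nodes) <= budget_per_stage
    for j in nodes:
        prob += x[s][0][j] + x[s][1][j] <= 1              # suppress each node at most once

# Nonanticipativity: stage-1 decisions are made before the weather scenario is known,
# so they must be identical in every scenario.
for j in nodes:
    prob += x["hot_dry"][0][j] == x["cooler"][0][j]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: [int(v.value()) for v in x[s][0]] for s in scenarios})  # identical stage-1 plans
```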


2014, Vol 41 (24), pp. 9197-9205
Author(s): S. Hemri, M. Scheuerer, F. Pappenberger, K. Bogner, T. Haiden

2020, Vol 109 (11), pp. 2121-2139
Author(s): Aljaž Osojnik, Panče Panov, Sašo Džeroski

Abstract In many application settings, labeling data examples is a costly endeavor, while unlabeled examples are abundant and cheap to produce. Labeling can be particularly problematic in an online setting, where arbitrarily many examples may arrive at high frequency. It is also problematic when we need to predict complex values (e.g., multiple real values), a task that has started to receive considerable attention, though mostly in the batch setting. In this paper, we propose a method for online semi-supervised multi-target regression. It is based on incremental trees for multi-target regression and the predictive clustering framework, and it exploits unlabeled examples to improve its predictive performance over using the labeled examples alone. We compare the proposed iSOUP-PCT method with supervised tree methods, which do not use unlabeled examples, and with an oracle method, which uses unlabeled examples as though they were labeled. We also compare the proposed method to the available state-of-the-art methods. The method achieves good predictive performance at the cost of increased consumption of computational resources compared to its supervised variant. It also beats the state-of-the-art methods when labeled examples are very scarce, while achieving comparable performance when labeled examples are more common.
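
The evaluation protocol described above (a supervised learner that sees only the labeled examples versus an oracle that sees every label) can be sketched in a few lines. This is a hypothetical prequential (test-then-train) loop with a generic incremental multi-target regressor standing in for iSOUP-PCT; the labeled fraction and the synthetic stream are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
n, d, m, labeled_frac = 2000, 5, 3, 0.1          # 10% of the stream is labeled
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, m))
Y = X @ W + rng.normal(scale=0.1, size=(n, m))   # synthetic multi-target stream

def make_model():
    return MultiOutputRegressor(SGDRegressor(max_iter=1, tol=None))

supervised, oracle = make_model(), make_model()
err_sup, err_ora = [], []
fitted_sup = fitted_ora = False

for i in range(n):
    x, y = X[i:i + 1], Y[i:i + 1]
    if fitted_sup and fitted_ora:                # test before train (prequential evaluation)
        err_sup.append(np.mean((supervised.predict(x) - y) ** 2))
        err_ora.append(np.mean((oracle.predict(x) - y) ** 2))
    if rng.random() < labeled_frac:              # supervised learner only sees labeled examples
        supervised.partial_fit(x, y)
        fitted_sup = True
    oracle.partial_fit(x, y)                     # oracle uses every label
    fitted_ora = True

print(f"supervised MSE={np.mean(err_sup):.3f}  oracle MSE={np.mean(err_ora):.3f}")
```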


2016, Vol 31 (1), pp. 255-271
Author(s): Ryan A. Sobash, Craig S. Schwartz, Glen S. Romine, Kathryn R. Fossell, Morris L. Weisman

Abstract Probabilistic severe weather forecasts for days 1 and 2 were produced using 30-member convection-allowing ensemble forecasts initialized by an ensemble Kalman filter data assimilation system during a 32-day period coinciding with the Mesoscale Predictability Experiment. The forecasts were generated by smoothing the locations where model output indicated extreme values of updraft helicity, a surrogate for rotating thunderstorms. The day 1 surrogate severe probability forecasts (SSPFs) produced skillful and reliable predictions of severe weather during this period, after an appropriate calibration of the smoothing kernel. The ensemble SSPFs exceeded the skill of SSPFs derived from two benchmark deterministic forecasts, with the largest differences occurring on the mesoscale, while all SSPFs produced similar forecasts on synoptic scales. While the deterministic SSPFs often overforecasted high probabilities, the ensemble improved the reliability of these probabilities at the expense of producing fewer high-probability values. For the day 2 period, the SSPFs provided guidance competitive with the day 1 forecasts, although additional smoothing was needed to reach the same level of skill, which reduced the forecast sharpness. Results were similar using 10 ensemble members, suggesting that value exists in running a smaller ensemble when computational resources are limited. Finally, the SSPFs were compared to severe weather risk areas identified in Storm Prediction Center (SPC) convective outlooks. The SSPF skill was comparable to the SPC outlook skill in identifying regions where severe weather would occur, although performance varied from day to day.
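
A minimal sketch of the surrogate severe probability forecast idea: threshold each member's updraft helicity field, average the exceedances across the ensemble, and smooth the result with a Gaussian kernel. The threshold, smoothing length scale and random fields below are placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
n_members, ny, nx = 30, 120, 120
uh = rng.gamma(shape=2.0, scale=20.0, size=(n_members, ny, nx))  # stand-in UH fields (m^2 s^-2)

uh_threshold = 75.0       # hypothetical "extreme" updraft-helicity threshold
sigma_gridpoints = 12.0   # hypothetical smoothing kernel width (grid points)

# Fraction of members exceeding the threshold at each grid point ...
exceedance = (uh >= uh_threshold).mean(axis=0)
# ... smoothed into a surrogate severe probability forecast (SSPF).
sspf = gaussian_filter(exceedance, sigma=sigma_gridpoints)
print(f"max probability: {sspf.max():.2f}")
```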


Author(s): David Schoenach, Thorsten Simon, Georg Johann Mayr

Abstract. Weather forecasts from ensemble prediction systems (EPS) are improved by statistical models trained on past EPS forecasts and the corresponding atmospheric observations. Recently these corrections have moved from being univariate to multivariate, with the focus on (quasi-)horizontal atmospheric variables. This paper extends the correction methods to EPS forecasts of vertical profiles in two steps. First, univariate distributional regression methods correct the probability distributions separately at each vertical level. In the second step, copula coupling re-establishes the dependence among neighboring levels by using the rank order structure of the EPS forecasts. The method is applied to EPS data from the European Centre for Medium-Range Weather Forecasts (ECMWF) at model levels interpolated to four locations in Germany, from which radiosondes are released four times a day to measure profiles of temperature and other variables. Winter and summer case studies illustrate that univariate postprocessing fails to preserve stable layers, which are crucial for many atmospheric processes. Quantile resampling, and a resampling that preserves the relative distances between individual EPS members, improve the calibration of the raw temperature profile forecasts, as shown by rank histograms. They also improve the multivariate energy and variogram scores and retain the stable layers. Improvements occur at all times of day and in all seasons, and are largest within the atmospheric boundary layer and for shorter lead times.
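
The two-step idea (univariate calibration per level, then copula coupling via the EPS rank order) can be sketched as follows. The "calibrated" quantiles here come from a placeholder univariate adjustment; the essential part is the reordering step, in which each level's calibrated sample is rearranged to follow the rank structure of the raw ensemble, as in ensemble copula coupling.

```python
import numpy as np

rng = np.random.default_rng(3)
n_levels, n_members = 40, 50
raw = np.cumsum(rng.normal(size=(n_levels, n_members)), axis=0)  # stand-in raw temperature profiles

# Step 1 (placeholder): univariate calibration at each level, giving a sorted
# sample of the corrected marginal distribution (here: a simple bias/spread adjustment).
calibrated_sorted = np.sort(1.1 * raw + 0.5, axis=1)

# Step 2: copula coupling -- reorder the calibrated sample at each level so that
# it has the same rank order as the raw EPS members, restoring the vertical dependence.
ranks = raw.argsort(axis=1).argsort(axis=1)        # rank of each member at each level
coupled = np.take_along_axis(calibrated_sorted, ranks, axis=1)

# Member k of `coupled` now varies with height like raw member k,
# while each level's marginal distribution matches the calibrated one.
print(coupled.shape)
```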


2021, Vol 21 (12)
Author(s): Akihiro Nomura, Masahiro Noguchi, Mitsuhiro Kometani, Kenji Furukawa, Takashi Yoneda

Abstract Purpose of Review Artificial intelligence (AI) can make advanced inferences from large amounts of data. The mainstream technologies of the AI boom in 2021 are machine learning (ML) and deep learning, which have made significant progress thanks to the increase in computational resources accompanying the dramatic improvement in computer performance. In this review, we introduce AI/ML-based medical devices and prediction models for diabetes. Recent Findings In the field of diabetes, several AI-/ML-based medical devices for automatic retinal screening, clinical diagnosis support, and patient self-management have already been approved by the US Food and Drug Administration. As for the prediction of new-onset diabetes using ML methods, performance has so far not been superior to that of conventional risk stratification models based on statistical approaches. Summary Despite the current situation, it is expected that the predictive performance of AI will soon be maximized by large amounts of well-organized data and abundant computational resources, which will contribute to a dramatic improvement in the accuracy of disease prediction models for diabetes.



2020, Vol 27 (1), pp. 23-34
Author(s): Moritz N. Lang, Sebastian Lerch, Georg J. Mayr, Thorsten Simon, Reto Stauffer, ...

Abstract. Non-homogeneous regression is a frequently used post-processing method for increasing the predictive skill of probabilistic ensemble weather forecasts. To adjust for seasonally varying error characteristics between ensemble forecasts and corresponding observations, different time-adaptive training schemes, including the classical sliding training window, have been developed for non-homogeneous regression. This study compares three such training approaches with the sliding-window approach for the application of post-processing near-surface air temperature forecasts across central Europe. The predictive performance is evaluated conditional on three different groups of stations located in plains, in mountain foreland, and within mountainous terrain, as well as on a specific change in the ensemble forecast system of the European Centre for Medium-Range Weather Forecasts (ECMWF) used as input for the post-processing. The results show that time-adaptive training schemes using data over multiple years stabilize the temporal evolution of the coefficient estimates, yielding an increased predictive performance for all station types tested compared to the classical sliding-window approach based on the most recent days only. While this may not be surprising under fully stable model conditions, it is shown that “remembering the past” from multiple years of training data is typically also superior to the classical sliding-window approach when the ensemble prediction system is affected by certain model changes. Thus, reducing the variance of the non-homogeneous regression estimates due to increased training data appears to be more important than reducing its bias by adapting rapidly to the most current training data only.
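
Non-homogeneous (Gaussian) regression itself can be written compactly: the forecast mean and standard deviation are modelled from the ensemble mean and ensemble spread, and the coefficients are fitted by minimizing the negative log-likelihood over a training window. The sketch below is a generic NGR fit with synthetic data, not the study's implementation; "sliding window" versus "multi-year" training then amounts to nothing more than the choice of rows passed to fit_ngr.

```python
import numpy as np
from scipy.optimize import minimize

def fit_ngr(ens_mean, ens_sd, obs):
    """Fit mu = a + b*ens_mean, sigma = exp(c + d*log(ens_sd)) by maximum likelihood."""
    def nll(theta):
        a, b, c, d = theta
        mu = a + b * ens_mean
        sigma = np.exp(c + d * np.log(ens_sd))
        return np.sum(0.5 * np.log(2 * np.pi * sigma**2) + (obs - mu) ** 2 / (2 * sigma**2))
    return minimize(nll, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead").x

rng = np.random.default_rng(11)
n = 3000                                          # e.g. several years of daily forecasts
ens_mean = rng.normal(10, 8, n)
ens_sd = rng.gamma(2.0, 0.8, n)
obs = 1.5 + 0.95 * ens_mean + rng.normal(0, 1.2 * ens_sd)   # synthetic verifying observations

# "Multi-year" training uses all past rows; a classical sliding window uses only the last few weeks.
coef_long = fit_ngr(ens_mean[:-365], ens_sd[:-365], obs[:-365])
coef_short = fit_ngr(ens_mean[-400:-365], ens_sd[-400:-365], obs[-400:-365])
print("multi-year coefficients:   ", np.round(coef_long, 2))
print("sliding-window coefficients:", np.round(coef_short, 2))
```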

