Predicting Fire Propagation across Heterogeneous Landscapes Using WyoFire: A Monte Carlo-Driven Wildfire Model

Fire ◽  
2020 ◽  
Vol 3 (4) ◽  
pp. 71
Author(s):  
Cory W. Ott ◽  
Bishrant Adhikari ◽  
Simon P. Alexander ◽  
Paddington Hodza ◽  
Chen Xu ◽  
...  

The scope of wildfires over the past decade has brought these natural hazards to the forefront of risk management. Wildfires threaten human health, safety, and property, and there is a need for comprehensive and readily usable wildfire simulation platforms that wildfire experts can apply effectively to help preserve physical infrastructure, biodiversity, and landscape integrity. Evaluating such platforms is important, particularly in determining their reliability in forecasting the spatiotemporal trajectories of wildfire events. This study evaluated the predictive performance of a wildfire simulation platform that implements a Monte Carlo-based wildfire model called WyoFire. WyoFire was used to predict the growth of 10 wildfires that occurred in Wyoming, USA, in 2017 and 2019. The predictive quality of the model was determined by comparing areas of disagreement and agreement between the observed and simulated wildfire boundaries. Spatial and statistical analyses of observed and predicted fire perimeters were conducted to measure the accuracy of the predicted outputs. Overestimation–underestimation was greatest in grassland fires (>32) and lowest in mixed-forest, woodland, and shrub-steppe fires (<−2.5). The results indicate that simulations of wildfires in shrubland- and grassland-dominated environments tended to over-predict, while simulations of fires in forested and woodland-dominated environments tended to under-predict.
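
The perimeter comparison described above can be sketched as a raster overlay. The helper below is illustrative only: the agreement/disagreement areas follow the abstract, but the signed over/under index and its normalisation are assumptions, not WyoFire's published metric.

```python
import numpy as np

def over_under_index(observed, simulated):
    """Compare a simulated burn footprint against the observed one.

    Both inputs are boolean rasters of equal shape (True = burned).
    Returns the agreement, over-predicted and under-predicted cell
    counts plus a signed index: positive values mean the simulation
    over-predicts fire extent, negative values mean under-prediction.
    """
    observed = np.asarray(observed, dtype=bool)
    simulated = np.asarray(simulated, dtype=bool)
    agreement = np.sum(observed & simulated)
    over = np.sum(simulated & ~observed)   # burned in model only
    under = np.sum(observed & ~simulated)  # burned in reality only
    index = (over - under) / max(agreement, 1)
    return agreement, over, under, index

# Toy 1D footprint standing in for a rasterized landscape
obs = [True, True, True, False, False]
sim = [True, True, False, True, True]
agree, over, under, idx = over_under_index(obs, sim)
```

On this toy input the simulation over-predicts two cells and under-predicts one, giving a positive index, consistent with the over-prediction pattern the study reports for grassland fires.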

Algorithms ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 351
Author(s):  
Wilson Tsakane Mongwe ◽  
Rendani Mbuvha ◽  
Tshilidzi Marwala

Markov chain Monte Carlo (MCMC) techniques are usually used to infer model parameters when closed-form inference is not feasible, one of the simplest MCMC methods being the random walk Metropolis–Hastings (MH) algorithm. The MH algorithm suffers from random walk behaviour, which results in inefficient exploration of the target posterior distribution. This method has been improved upon, with algorithms such as the Metropolis Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo being popular modifications of MH. In this work, we revisit the MH algorithm to reduce the autocorrelations in the generated samples without adding significant computational time. We present: (1) the Stochastic Volatility Metropolis–Hastings (SVMH) algorithm, which uses a random scaling matrix in the MH proposal, and (2) the Locally Scaled Metropolis–Hastings (LSMH) algorithm, in which the scaling matrix depends on the local geometry of the target distribution. For both algorithms, the proposal distribution is still Gaussian, centred at the current state. The empirical results show that these minor additions to the MH algorithm significantly improve effective sample sizes and predictive performance over the vanilla MH method. The SVMH algorithm produces effective sample sizes similar to the LSMH method, with SVMH outperforming LSMH on an execution-time-normalised effective sample size basis. The performance of the proposed methods is also compared to MALA and to the current state-of-the-art method, the No-U-Turn Sampler (NUTS). The analysis is performed in a simulation study based on Neal's funnel and multivariate Gaussian distributions, and on real-world data modeled using jump diffusion processes and Bayesian logistic regression.
Although both MALA and NUTS outperform the proposed algorithms on an effective sample size basis, the SVMH algorithm has similar or better predictive performance than MALA and NUTS across the various targets. In addition, the SVMH algorithm outperforms the other MCMC algorithms on a normalised effective sample size basis on the jump diffusion datasets. These results indicate the overall usefulness of the proposed algorithms.
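
The core idea behind SVMH, a random-walk MH sampler whose Gaussian proposal is rescaled by a fresh random draw each iteration, can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the stochastic scale here is a diagonal log-normal draw, chosen independently of the current state so the proposal stays symmetric and the standard MH acceptance ratio applies unchanged.

```python
import numpy as np

def svmh_sample(log_target, x0, n_samples, base_scale=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a stochastic step scale.

    Each iteration draws a fresh random (diagonal, log-normal)
    scaling for the Gaussian proposal. Because the scale is drawn
    independently of the current state, the proposal remains
    symmetric and the usual MH acceptance ratio is unchanged.
    """
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_target(x)
    samples, accepted = [], 0
    for _ in range(n_samples):
        scale = base_scale * np.exp(rng.normal(0.0, 0.5, size=x.shape))
        prop = x + scale * rng.normal(size=x.shape)
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:  # MH accept step
            x, logp = prop, logp_prop
            accepted += 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_samples

# Toy run on a standard 1D Gaussian target, started away from the mode
log_gauss = lambda x: -0.5 * np.sum(x ** 2)
draws, acc_rate = svmh_sample(log_gauss, [3.0], 5000, rng=0)
```

Mixing random step scales in this way lets occasional large jumps escape regions where a fixed-scale random walk would crawl, which is the intuition behind the improved effective sample sizes reported above.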


2017 ◽  
Vol 26 (6) ◽  
pp. 485 ◽  
Author(s):  
Kirk W. Davies ◽  
Amanda Gearhart ◽  
Chad S. Boyd ◽  
Jon D. Bates

The interaction between grazing and fire influences ecosystems around the world. However, little is known about the influence of grazing on fire, in particular on ignition and initial spread, and how this influence varies with grazing management. We investigated the effects of fall (autumn) grazing, spring grazing and no grazing on fuel characteristics, fire ignition and initial spread during the wildfire season (July and August) at five shrub-steppe sites in Oregon, USA. Both grazing treatments decreased fine fuel biomass, cover and height, and increased fuel moisture, thereby decreasing ignition and initial spread compared with the ungrazed treatment. However, effects differed between fall and spring grazing. In August, the probability of initial spread was six-fold greater in the fall-grazed than in the spring-grazed treatment. This suggests that spring grazing may have a greater effect on fires than fall grazing, likely because fall grazing does not influence the current year's plant growth. The results also highlight that the grazing–fire interaction varies with grazing management. Grazing in either the fall or the spring before the wildfire season reduces the probability of fire propagation; grazing is thus a potential fuel-management tool.


2019 ◽  
Vol 21 (5) ◽  
pp. 1509-1522 ◽  
Author(s):  
Akila Katuwawala ◽  
Christopher J Oldfield ◽  
Lukasz Kurgan

Abstract Experimental annotations of intrinsic disorder are available for only 0.1% of the 147,000,000 currently sequenced proteins. Over 60 sequence-based disorder predictors have been developed to help bridge this gap. Current benchmarks assess predictive performance on datasets of proteins; however, predictions are often interpreted for individual proteins. We demonstrate that protein-level predictive performance varies substantially from the dataset-level benchmarks. We therefore perform a first-of-its-kind protein-level assessment of 13 popular disorder predictors using 6200 disorder-annotated proteins. We show that the protein-level performance distributions are substantially skewed toward high predictive quality while having long tails of poor predictions. Consequently, between 57% and 75% of proteins secure higher predictive performance than the currently used dataset-level assessment suggests, but as many as 30% of proteins, located in the long tails, suffer low predictive performance. These proteins typically have relatively high amounts of disorder, in contrast to the mostly structured proteins that are predicted accurately by all 13 methods. Interestingly, each predictor provides the most accurate results for some proteins, while the method that performs best at the dataset level is in fact the best for only about 30% of proteins. Moreover, the majority of proteins are predicted more accurately than the dataset-level performance of the most accurate tool by at least four disorder predictors. These results suggest that disorder predictors outperform their current benchmark performance for the majority of proteins and that they complement each other; however, novel tools that accurately identify the hard-to-predict proteins and make accurate predictions for them are needed.
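
The gap between dataset-level and protein-level assessment can be made concrete: pooling residues across proteins before computing AUC can hide proteins that are predicted very poorly. The sketch below (a hand-rolled rank-based AUC; an illustration of the two evaluation protocols, not of any specific predictor) contrasts the two views:

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC: probability that a randomly chosen positive
    (disordered) residue outranks a negative (ordered) one."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return np.nan  # undefined for single-class proteins
    wins = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def protein_vs_dataset(proteins):
    """proteins: list of per-protein (labels, scores) arrays.

    Returns the pooled dataset-level AUC alongside the per-protein
    AUC distribution, which can tell a very different story.
    """
    all_l = np.concatenate([l for l, _ in proteins])
    all_s = np.concatenate([s for _, s in proteins])
    return auc(all_l, all_s), [auc(l, s) for l, s in proteins]

# One well-predicted and one badly predicted toy protein
proteins = [(np.array([1, 0]), np.array([0.9, 0.1])),
            (np.array([1, 0]), np.array([0.2, 0.8]))]
dataset_auc, per_protein = protein_vs_dataset(proteins)
```

Here the pooled AUC of 0.75 looks respectable, yet the second protein is predicted worse than chance, which is exactly the long-tail effect the study documents.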


1985 ◽  
Vol 29 (10) ◽  
pp. 912-915 ◽  
Author(s):  
Cary Robb Jensen

In this paper, a method is demonstrated that allows the predictive performance of two preference-ordering models to be compared. The method employs a three-dimensional graph that is both informative and easy to interpret, and is demonstrated with a Monte Carlo study designed to compare the predictive performance of a utility preference model under different weighting-coefficient schemes.
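
A minimal version of such a Monte Carlo comparison, for a weighted-additive utility model, might look like the following sketch. The attribute distributions, the weight vectors and the hit-rate criterion are all illustrative assumptions, not details from the paper:

```python
import numpy as np

def hit_rate(true_w, model_w, n_trials=2000, n_alts=3, rng=0):
    """Monte Carlo comparison of weighting-coefficient schemes in a
    weighted-additive utility model: the fraction of simulated
    choice sets in which the model's weights select the same
    alternative as the 'true' preference weights."""
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(n_trials):
        # Random attribute utilities for each alternative
        attrs = rng.uniform(size=(n_alts, len(true_w)))
        hits += np.argmax(attrs @ true_w) == np.argmax(attrs @ model_w)
    return hits / n_trials

true_w = np.array([0.5, 0.3, 0.15, 0.05])        # hypothetical true weights
equal_hits = hit_rate(true_w, np.full(4, 0.25))  # equal-weight scheme
perfect_hits = hit_rate(true_w, true_w)          # matched weights
```

Plotting hit rates like these over a grid of weighting schemes is the kind of surface a three-dimensional comparison graph could display.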


2019 ◽  
Vol 18 (3) ◽  
pp. 583-600
Author(s):  
Richard Ohene Asiedu ◽  
William Gyadu-Asiedu

Purpose This paper aims to develop a baseline model for time overrun. Design/methodology/approach Information on 321 completed construction projects was used to assess the predictive performance of two statistical techniques, namely, multiple regression and the Bayesian approach. Findings The Bayesian Markov chain Monte Carlo model was observed to improve predictive ability compared with multiple linear regression. Beyond the nuances peculiar to individual projects, the scope factors of initial duration, gross floor area and number of storeys were observed to be stable predictors of time overrun. Originality/value The model contributes to improving the reliability of predicting time overruns.
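
The comparison of the two techniques can be sketched in miniature: a bare-bones Metropolis sampler for regression coefficients (flat prior and unit noise scale assumed) set against the ordinary least-squares fit. This is an illustrative stand-in, not the paper's actual model or data:

```python
import numpy as np

def bayes_lm_mcmc(X, y, n_iter=5000, step=0.1, rng=0):
    """Metropolis sampler for linear-regression coefficients with a
    flat prior and unit noise scale. On well-behaved data the
    posterior mean should land near the least-squares solution."""
    rng = np.random.default_rng(rng)
    beta = np.zeros(X.shape[1])
    log_lik = lambda b: -0.5 * np.sum((y - X @ b) ** 2)
    chain, ll = [], log_lik(beta)
    for _ in range(n_iter):
        prop = beta + step * rng.normal(size=beta.shape)
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis accept
            beta, ll = prop, ll_prop
        chain.append(beta.copy())
    return np.array(chain)

# Toy data: 'overrun' driven by two hypothetical scope factors
data_rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), data_rng.uniform(size=(200, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * data_rng.normal(size=200)
chain = bayes_lm_mcmc(X, y)
post_mean = chain[2500:].mean(axis=0)       # discard burn-in
ols = np.linalg.lstsq(X, y, rcond=None)[0]  # frequentist comparison
```

Unlike the point estimate from regression, the retained chain gives full posterior distributions for the coefficients, which is the advantage the paper attributes to the Bayesian MCMC model.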


2020 ◽  
Vol 98 (Supplement_4) ◽  
pp. 178-178
Author(s):  
Arthur Francisco Araujo Fernandes ◽  
João R R Dorea ◽  
Bruno D Valente ◽  
Robert Fitzgerald ◽  
William O Herring ◽  
...  

Abstract The measurement of carcass traits in live pigs, such as muscle depth (MD) and backfat thickness (BF), is a topic of great interest for breeding companies and production farms. Breeding companies currently measure MD and BF using medical imaging technologies such as ultrasound (US). However, US is costly, requires trained personnel, and involves direct interaction with the animals, which is an added stressor. An interesting alternative in this regard is to use computer vision techniques. Farmers would also benefit from such an application, as they would be able to better adjust feed composition and delivery. Therefore, the objectives of this study were: (1) to develop a computer vision system for prediction of MD and BF from 3D images of finishing pigs; and (2) to compare the predictive ability of statistical (multiple linear regression, partial least squares) and machine learning (elastic nets and artificial neural networks) approaches using features extracted from the images against a deep learning (DL) approach that uses the raw image as input. A dataset containing 3D images and ultrasound measurements of 618 pigs with average body weight of 120 kg, MD of 65 mm, and BF of 6 mm was used in this study. To assess the predictive performance of the different strategies, a 5-fold cross-validation approach was used. The DL approach achieved the best predictive performance for both traits, with predictive mean absolute scaled error (MASE) of 5.10% and 13.62%, root-mean-square error (RMSE) of 4.35 mm and 1.10 mm, and R2 of 0.51 and 0.45, for MD and BF, respectively. In conclusion, it was demonstrated that it is possible to satisfactorily predict MD and BF using 3D images autonomously collected under farm conditions. The best predictive quality was achieved by a DL approach, which also simplifies the data workflow as it uses raw 3D images as inputs.
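
The 5-fold cross-validation and MASE evaluation can be sketched as follows. The feature, the trait values and the reading of MASE as MAE scaled by the trait mean are illustrative assumptions, not details from the study:

```python
import numpy as np

def mase_percent(y_true, y_pred):
    """Mean absolute error scaled by the mean observed value,
    expressed as a percentage (one plausible reading of the MASE
    figures quoted in the abstract)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs(y_true - y_pred)) / np.mean(y_true)

def kfold_splits(n, k=5, rng=0):
    """Yield (train_idx, test_idx) pairs for shuffled k-fold CV."""
    idx = np.random.default_rng(rng).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

# Toy stand-in: predict 'muscle depth' (mm) from one image feature
data_rng = np.random.default_rng(2)
feat = data_rng.uniform(40, 60, size=100)
md = 1.1 * feat + data_rng.normal(0, 2, size=100)
scores = []
for train, test in kfold_splits(100):
    coef = np.polyfit(feat[train], md[train], 1)  # simple linear model
    scores.append(mase_percent(md[test], np.polyval(coef, feat[test])))
cv_mase = float(np.mean(scores))
```

Averaging the fold-wise scores, as done here, is the standard way to summarise k-fold performance when comparing model families such as the statistical, machine learning and DL approaches in the study.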


Biomolecules ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1337
Author(s):  
Ruiyang Song ◽  
Baixin Cao ◽  
Zhenling Peng ◽  
Christopher J. Oldfield ◽  
Lukasz Kurgan ◽  
...  

Non-synonymous single nucleotide polymorphisms (nsSNPs) may result in pathogenic changes that are associated with human diseases. Accurate prediction of these deleterious nsSNPs is in high demand. The existing predictors of deleterious nsSNPs secure modest levels of predictive performance, leaving room for improvement. We propose a new sequence-based predictor, DMBS, which addresses the need to improve predictive quality. The design of DMBS relies on the observation that deleterious mutations are likely to occur at highly conserved and functionally important positions in the protein sequence. Correspondingly, we introduce two innovative components. First, we improve the estimates of conservation computed from multiple sequence profiles based on two complementary databases and two complementary alignment algorithms. Second, we utilize putative annotations of functional/binding residues produced by two state-of-the-art sequence-based methods. These inputs are processed by a random forest model that provides favorable predictive performance when empirically compared against five other machine-learning algorithms. Empirical results on four benchmark datasets reveal that DMBS achieves AUC > 0.94, outperforming current methods, including protein structure-based approaches. In particular, DMBS secures AUC = 0.97 for the SNPdbe and ExoVar datasets, compared to AUC = 0.70 and 0.88, respectively, obtained by the best available methods. Further tests on the independent HumVar dataset show that our method significantly outperforms the state-of-the-art method SNPdryad. We conclude that DMBS provides accurate predictions that can effectively guide wet-lab experiments in a high-throughput manner.
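
The first component of DMBS improves conservation estimates computed from multiple sequence profiles. A common way to score per-position conservation, shown here purely as an illustration (not the authors' exact formulation), is one minus the normalised Shannon entropy of each alignment column; a DMBS-style design would then combine such scores across complementary databases and aligners:

```python
import numpy as np
from collections import Counter

def column_conservation(alignment):
    """Per-position conservation of a multiple sequence alignment
    (assumes >= 2 aligned sequences of equal length), scored as
    1 - normalised Shannon entropy of each column:
    1 = fully conserved, 0 = maximally variable."""
    n_seqs = len(alignment)
    scores = []
    for column in zip(*alignment):
        p = np.array(list(Counter(column).values())) / n_seqs
        entropy = -np.sum(p * np.log2(p))
        scores.append(1.0 - entropy / np.log2(n_seqs))
    return np.array(scores)

msa = ["MKVL", "MKIL", "MRVL"]   # toy three-sequence alignment
cons = column_conservation(msa)  # columns 1 and 4 are invariant
```

Positions with conservation near 1 are exactly those where, per the observation underlying DMBS, a substitution is most likely to be deleterious.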


2017 ◽  
Author(s):  
Dan Lu ◽  
Daniel Ricciuto ◽  
Anthony Walker ◽  
Cosmin Safta ◽  
William Munger

Abstract. Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework for estimating model parameters and their associated uncertainties via posterior distributions. The effectiveness and efficiency of the method depend strongly on the MCMC algorithm used. In this study, a Differential Evolution Adaptive Metropolis (DREAM) algorithm was used to estimate posterior distributions of 21 parameters of the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. DREAM is a multi-chain method that uses a differential-evolution technique for chain movement, allowing it to be applied efficiently to high-dimensional problems and to reliably estimate heavy-tailed and multimodal distributions that are difficult for single-chain schemes with a Gaussian proposal distribution. The results were evaluated against the popular Adaptive Metropolis (AM) scheme. DREAM indicated that two parameters controlling autumn phenology have multiple modes in their posterior distributions, whereas AM identified only one mode. Calibration with DREAM resulted in a better model fit and predictive performance than with AM. DREAM enables thorough exploration of the posterior distributions of model parameters, reduces the risk of false convergence to a local optimum, and potentially improves the predictive performance of the calibrated model.
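
The differential-evolution chain move at the heart of DREAM's parent method, DE-MC, can be sketched compactly: each chain proposes a jump along the difference of two other randomly chosen chains, then applies the usual Metropolis accept/reject. This is a minimal illustration of the move, without DREAM's subspace sampling and outlier handling:

```python
import numpy as np

def de_mc_step(chains, log_target, gamma=None, eps=1e-4, rng=None):
    """One differential-evolution Metropolis update of all chains.

    Each chain i proposes chains[i] + gamma*(chains[a] - chains[b])
    + small noise, where a and b are two other randomly chosen
    chains, then applies the usual Metropolis accept/reject.
    """
    rng = np.random.default_rng(rng)
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)  # standard DE-MC jump factor
    new = chains.copy()
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2,
                          replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
            + eps * rng.normal(size=d)
        if np.log(rng.uniform()) < log_target(prop) - log_target(chains[i]):
            new[i] = prop
    return new

# Eight chains converging on a 2D standard Gaussian target
log_gauss = lambda x: -0.5 * np.sum(x ** 2)
rng = np.random.default_rng(3)
chains = rng.normal(size=(8, 2)) * 3.0  # over-dispersed start
for _ in range(500):
    chains = de_mc_step(chains, log_gauss, rng=rng)
```

Because the jump direction and scale adapt to the spread of the chain population itself, the move self-tunes to the target's geometry, which is what lets DREAM handle the heavy-tailed and multimodal posteriors mentioned above.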

