Allowing a wildfire to burn: estimating the effect on future fire suppression costs

2013 ◽  
Vol 22 (7) ◽  
pp. 871 ◽  
Author(s):  
Rachel M. Houtman ◽  
Claire A. Montgomery ◽  
Aaron R. Gagnon ◽  
David E. Calkin ◽  
Thomas G. Dietterich ◽  
...  

Where a legacy of aggressive wildland fire suppression has left forests in need of fuel reduction, allowing wildland fire to burn may provide fuel treatment benefits, thereby reducing suppression costs from subsequent fires. The least-cost-plus-net-value-change model of wildland fire economics includes benefits of wildfire in a framework for evaluating suppression options. In this study, we estimated one component of that benefit – the expected present value of the reduction in suppression costs for subsequent fires arising from the fuel treatment effect of a current fire. To that end, we employed Monte Carlo methods to generate a set of scenarios for subsequent fire ignition and weather events, which are referred to as sample paths, for a study area in central Oregon. We simulated fire on the landscape over a 100-year time horizon using existing models of fire behaviour, vegetation and fuels development, and suppression effectiveness, and we estimated suppression costs using an existing suppression cost model. Our estimates suggest that the potential cost savings may be substantial. Further research is needed to estimate the full least-cost-plus-net-value-change model. This line of research will extend the set of tools available for developing wildfire management plans for forested landscapes.
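As a back-of-envelope illustration of the sample-path idea described above, the sketch below draws Poisson ignition counts and lognormal weather for each simulated year and discounts a placeholder suppression-cost function over a 100-year horizon. The rates, distributions, and cost function are invented for illustration only; they are not the fitted models or data used in the study.

```python
import math
import random

HORIZON_YEARS = 100
IGNITIONS_PER_YEAR = 0.8   # hypothetical mean ignition rate for the study area
DISCOUNT_RATE = 0.04       # hypothetical real discount rate

def poisson(rng: random.Random, lam: float) -> int:
    """Knuth's method for small-rate Poisson draws."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def suppression_cost(wind_speed_kmh: float) -> float:
    """Placeholder cost model: fires are costlier to suppress in stronger wind."""
    return 50_000.0 * (1.0 + wind_speed_kmh / 20.0)

def sample_path_pv(rng: random.Random) -> float:
    """Present value of suppression costs along one simulated sample path."""
    pv = 0.0
    for year in range(HORIZON_YEARS):
        for _ in range(poisson(rng, IGNITIONS_PER_YEAR)):
            wind = rng.lognormvariate(math.log(15.0), 0.5)  # hypothetical weather draw
            pv += suppression_cost(wind) / (1.0 + DISCOUNT_RATE) ** year
    return pv

rng = random.Random(42)
paths = [sample_path_pv(rng) for _ in range(1_000)]
print(f"expected PV of suppression costs: {sum(paths) / len(paths):,.0f}")
```

A fuel-treatment benefit would enter as a reduction in the cost function for years following a burned fire; averaging across paths then estimates the expected present value of the savings the abstract refers to.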

2010 ◽  
Vol 19 (2) ◽  
pp. 238 ◽  
Author(s):  
William E. Mell ◽  
Samuel L. Manzello ◽  
Alexander Maranghides ◽  
David Butry ◽  
Ronald G. Rehm

Wildfires that spread into wildland–urban interface (WUI) communities present significant challenges on several fronts. In the United States, the WUI accounts for a significant portion of wildland fire suppression and wildland fuel treatment costs. Methods to reduce structure losses focus on treating either wildland fuels or residential fuels. There is a need for well-characterised, systematic testing of these approaches across a range of community and structure types and fire conditions. Laboratory experiments, field measurements and fire behaviour models can be used to better determine the exposure conditions faced by communities and structures. The outcome of such an effort would be proven fuel treatment techniques for wildland and residential fuels, risk assessment strategies, economic cost analysis models, and test methods with representative exposure conditions for fire-resistant building designs and materials.


2015 ◽  
Vol 69 (3) ◽  
pp. 164-170 ◽  
Author(s):  
Matthew P. Thompson ◽  
Nathaniel M. Anderson

2006 ◽  
Vol 21 (4) ◽  
pp. 217-221 ◽  
Author(s):  
David Calkin ◽  
Krista Gebert

Abstract Years of successful fire suppression have led to high fuel loads on the nation's forests, and steps are being taken by the nation's land management agencies to reduce these fuel loads. However, to achieve desired outcomes in a fiscally responsible manner, the cost of different fuel treatments in different forest settings, and their effectiveness in reducing losses due to wildland fire, must be understood. Currently, prioritizing fuel treatment activities and planning budget expenditures is limited by a lack of accurate cost data. The primary objective of this research was to develop regression models that may be used to estimate the cost of hazardous fuel reduction treatments based on USDA Forest Service Region, biophysical setting, treatment type, and design. A survey instrument was used to obtain activity-specific information directly from fire management officers at Forest Service Ranger Districts for treatments occurring between 2001 and 2003. For both prescribed burns and mechanical activities, treatment size explained the largest share of variation in cost per acre, with increased size reducing cost per acre, on average. We confirmed that data on Forest Service fuel treatment activities maintained in the National Fire Plan Operations and Reporting System were not sufficiently accurate for reasonable cost analysis and modeling.
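To make the model form concrete, here is a minimal sketch of the kind of log-log cost regression the abstract describes: cost per acre regressed on treatment size plus a treatment-type indicator. The records and any resulting coefficients are fabricated placeholders, not the authors' survey data or fitted models.

```python
import numpy as np

# hypothetical survey records: (acres, is_mechanical, cost_per_acre)
records = np.array([
    [40.0,  0, 310.0],
    [120.0, 0, 210.0],
    [300.0, 0, 150.0],
    [60.0,  1, 520.0],
    [150.0, 1, 410.0],
    [400.0, 1, 280.0],
])

log_size = np.log(records[:, 0])
mech = records[:, 1]
y = np.log(records[:, 2])                     # log(cost per acre)

X = np.column_stack([np.ones_like(log_size), log_size, mech])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit

# a negative log-size slope reproduces the economies-of-scale finding:
# larger treatments cost less per acre, on average
print(f"intercept={beta[0]:.2f}, log-size slope={beta[1]:.2f}, mechanical shift={beta[2]:.2f}")
```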


Author(s):  
Al Chen ◽  
Karen Nunez

The bulk chemical industry primarily relies upon truck transportation. Truck transportation, although costly, has a high percentage of on-time deliveries. Cheaper alternative transportation modes are less preferred due to a lack of supply chain information. Traditionally, the lack of information about in-transit products leads to higher safety stock and inventory levels, which results in higher costs. Process mapping and activity-based cost analysis are used to identify cost drivers and highlight the areas of opportunity for improving bulk chemical supply chain management. The activity-based cost information was used to develop a logistics cost model specifically tailored to the bulk chemical industry. The activity-based logistics cost model was used to assess potential cost savings from integrating centralized supply chain management software (visibility solutions), into the bulk chemical supply chain. The results of our analysis support integrating visibility solution software into multi-modal transportation to improve bulk chemical supply chain management. Integration of visibility solutions enables suppliers to improve their ability to monitor and control their inventory throughout the supply chain, increase overall asset utilization, and reduce global supply chain costs.
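A minimal sketch of the activity-based costing logic may help: each activity's cost is its driver volume times a cost rate, and a visibility solution is modelled as shrinking the safety-stock and expediting drivers. The activities, rates, and reductions below are hypothetical illustrations, not figures from the study.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    driver_volume: float   # e.g. shipments, units held, expedited orders
    rate: float            # cost per driver unit

def total_cost(activities: list[Activity]) -> float:
    return sum(a.driver_volume * a.rate for a in activities)

baseline = [
    Activity("truck shipments", 1_200, 850.0),
    Activity("rail shipments", 300, 400.0),
    Activity("safety-stock holding", 5_000, 12.0),
    Activity("order expediting", 90, 300.0),
]

# with in-transit visibility: assume less safety stock and fewer expedited orders
with_visibility = [
    Activity("truck shipments", 1_200, 850.0),
    Activity("rail shipments", 300, 400.0),
    Activity("safety-stock holding", 4_000, 12.0),
    Activity("order expediting", 45, 300.0),
    Activity("visibility software", 1, 15_000.0),
]

savings = total_cost(baseline) - total_cost(with_visibility)
print(f"estimated annual savings: {savings:,.0f}")
```

The point of the activity-based structure is that each proposed change maps onto specific drivers, so the savings estimate is traceable rather than a single aggregate guess.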


2021 ◽  
Author(s):  
Ryan Chard

Cloud computing provides access to a large-scale set of readily available computing resources at the click of a button. The cloud paradigm has commoditised computing capacity and is often touted as a low-cost model for executing and scaling applications. However, there are significant technical challenges associated with selecting, acquiring, configuring, and managing cloud resources, which can restrict the efficient utilisation of cloud capabilities.

Scientific computing is increasingly hosted on cloud infrastructure, in which scientific capabilities are delivered to the broad scientific community via Internet-accessible services. This migration from on-premise to on-demand cloud infrastructure is motivated by the sporadic usage patterns of scientific workloads and the potential cost savings of not having to purchase, operate, and manage compute infrastructure, a task that few scientific users are trained to perform. However, cloud platforms are not an automatic solution. Their flexibility is derived from an enormous number of services and configuration options, which in turn result in significant complexity for the user. In fact, naïve cloud usage can result in poor performance and excessive costs, which are then directly passed on to researchers.

This thesis presents methods for developing efficient cloud-based scientific services. Three real-world scientific services are analysed and a set of common requirements is derived. To address these requirements, this thesis explores automated and scalable methods for inferring network performance, considers various trade-offs (e.g., cost and performance) when provisioning instances, and profiles application performance, all in heterogeneous and dynamic cloud environments. Specifically, network tomography provides the mechanisms to infer network performance in dynamic and opaque cloud networks; cost-aware automated provisioning approaches enable services to consider, in real time, trade-offs such as cost, performance, and reliability; and automated application profiling allows a huge search space of applications, instance types, and configurations to be analysed to determine resource requirements and application performance. Finally, these contributions are integrated into an extensible and modular cloud provisioning and resource management service called SCRIMP. Cloud-based scientific applications and services can subscribe to SCRIMP to outsource their provisioning, usage, and management of cloud infrastructure. Collectively, the approaches presented in this thesis are shown to provide order-of-magnitude cost savings and significant performance improvements when employed by production scientific services.
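As a concrete, if simplified, example of the cost-aware provisioning idea, the sketch below picks the instance type that meets a deadline at the lowest estimated cost, given per-instance throughputs that automated profiling would supply. The instance names, prices, and throughputs are hypothetical; this is not SCRIMP's actual catalogue or API.

```python
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    usd_per_hour: float
    tasks_per_hour: float   # supplied by automated application profiling

def cheapest_meeting_deadline(catalog, n_tasks, deadline_h):
    """Return (cost, instance) for the cheapest type finishing in time."""
    feasible = []
    for inst in catalog:
        runtime_h = n_tasks / inst.tasks_per_hour
        if runtime_h <= deadline_h:
            feasible.append((runtime_h * inst.usd_per_hour, inst))
    if not feasible:
        return None  # no single instance meets the deadline; scaling out would be needed
    return min(feasible, key=lambda pair: pair[0])

catalog = [
    InstanceType("small", 0.10, 40.0),
    InstanceType("large", 0.45, 220.0),
    InstanceType("gpu", 2.10, 900.0),
]

choice = cheapest_meeting_deadline(catalog, n_tasks=2_000, deadline_h=12.0)
if choice:
    cost, inst = choice
    print(f"provision {inst.name}: ~{cost:.2f} USD")
```

The same structure extends to other trade-offs the thesis mentions, such as reliability, by filtering or re-weighting the feasible set before taking the minimum.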


2021 ◽  
Vol 6 (1) ◽  
pp. 203-220
Author(s):  
Gesine Wanke ◽  
Leonardo Bergami ◽  
Frederik Zahle ◽  
David Robert Verelst

Abstract. Within this work, an existing model of a Suzlon S111 2.1 MW turbine is used to estimate potential cost savings when the conventional upwind rotor concept is changed into a downwind rotor concept. A design framework is used to generate realistic design updates for the upwind configuration, as well as two design updates for the downwind configuration, including a pure material cost-out of the rotor blades and a new planform design. A full design load basis according to the standard has been used to evaluate the impact of the redesigns on the loads. A detailed cost model with load scaling is used to estimate the impact of the design changes on the turbine costs and the cost of energy. It is shown that the downwind configurations generally achieve a blade mass up to 5 % lower than that of the upwind redesign. Compared to an upwind baseline, the upwind redesign shows an estimated cost of energy reduction of 2.3 %, and the downwind designs achieve a maximum reduction of 1.3 %.
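A toy cost-of-energy calculation may make the comparison mechanism clearer: annualised capital cost plus operating cost, divided by annual energy production. The cost breakdown, fixed charge rate, and energy figures below are invented stand-ins, not the detailed load-scaled cost model used in the paper.

```python
def cost_of_energy(capex_usd, opex_usd_per_yr, aep_mwh, fcr=0.08):
    """Simple levelised cost: a fixed charge rate annualises the capital cost."""
    return (fcr * capex_usd + opex_usd_per_yr) / aep_mwh

# hypothetical baseline upwind turbine
blade_cost = 450_000.0          # all three blades
rest_of_turbine = 1_800_000.0
aep = 7_500.0                   # MWh/yr

base = cost_of_energy(blade_cost + rest_of_turbine, 60_000.0, aep)

# downwind redesign: assume 5 % lower blade mass gives ~5 % lower blade cost,
# but the new planform yields slightly lower annual energy production
down = cost_of_energy(0.95 * blade_cost + rest_of_turbine, 60_000.0, 0.99 * aep)

print(f"baseline CoE: {base:.2f} USD/MWh, downwind CoE: {down:.2f} USD/MWh")
```

The sketch shows why a blade-mass saving does not translate one-to-one into a cost-of-energy saving: blades are only part of the capital cost, and any energy-production penalty acts on the whole denominator.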


2019 ◽  
Vol 28 (7) ◽  
pp. 533 ◽  
Author(s):  
Robert E. Keane ◽  
Kathy Gray ◽  
Brett Davis ◽  
Lisa M. Holsinger ◽  
Rachel Loehman

Continued suppression of wildfires may allow more biomass to accumulate, fuelling even more intense fires. Enlightened fire management involves explicitly determining concurrent levels of suppression, wildland fire use (allowing some fires to burn) and fuel treatments to manage landscapes for ecological resilience. This study used the mechanistic landscape model FireBGCv2 to simulate ecological dynamics on three landscapes in the US northern Rocky Mountains to determine responses of seven management-oriented variables over a gradient of 10 fire suppression levels under two climate and four fuel treatment scenarios. We used historical range and variation (HRV) time series of the seven variables, both individually and merged into a Principal Components factor (PC1), to define the envelope that represents ecological resiliency, and compared all simulations with the HRV base case. We found that under today’s climates, using the PC1 factor, ecological resilience was maintained while suppressing 30–90% of wildfires depending on the landscape. We also found fuel treatments might allow higher suppression levels to occur and still maintain resilience. Other findings indicate that each landscape must be individually evaluated to determine the right mix of wildfires, wildland fire use and fuel treatments depending on the response variables used to evaluate resilience.
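The HRV-envelope test can be sketched as follows: project the response variables onto their first principal component and check what fraction of a scenario's PC1 scores falls inside the range spanned by the historical series. The data here are random placeholders, not FireBGCv2 output, and the 5th–95th percentile envelope is an assumed definition, not necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
hrv = rng.normal(size=(500, 7))                 # historical series of 7 variables
scenario = rng.normal(0.3, 1.0, size=(100, 7))  # one suppression-level scenario

# PCA via SVD of the centred historical data
mean = hrv.mean(axis=0)
_, _, vt = np.linalg.svd(hrv - mean, full_matrices=False)
pc1_axis = vt[0]                                # loading vector of PC1

hrv_scores = (hrv - mean) @ pc1_axis
scen_scores = (scenario - mean) @ pc1_axis

lo, hi = np.percentile(hrv_scores, [5, 95])     # HRV envelope on PC1
inside = np.mean((scen_scores >= lo) & (scen_scores <= hi))
print(f"fraction of scenario years inside the HRV envelope: {inside:.0%}")
```

Repeating this check across the gradient of suppression levels identifies the highest level at which a landscape's scores still stay inside its historical envelope, i.e. remains resilient in the paper's sense.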


1975 ◽  
Vol 97 (4) ◽  
pp. 1395-1398 ◽  
Author(s):  
D. Wilde ◽  
E. Prentice

The least-cost allocation of sure-fit machine tolerances for Speckhart’s exponential cost model is solved in closed form, without numerical iteration, as a geometric program with zero degrees of difficulty. The results show the importance of an exponential cost-sensitivity parameter defined as the “characteristic tolerance”. The theoretical minimum cost can be determined without specifying the corresponding tolerances. Specific minimum-cost tolerances can be computed later in closed form if potential cost savings are significant.
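The closed-form structure is easy to reproduce under an exponential cost-tolerance function of the form usually associated with Speckhart, C_i(t_i) = A_i e^{-t_i/τ_i} (an additive constant term, if present, drops out of the allocation). The notation and the equality stack-up constraint below are illustrative assumptions, not the paper's exact formulation:

```latex
% Minimise total cost subject to a worst-case (sure-fit) tolerance stack-up T
\[
  \min_{t_1,\dots,t_n}\; C=\sum_{i=1}^{n} A_i\, e^{-t_i/\tau_i}
  \qquad\text{subject to}\qquad \sum_{i=1}^{n} t_i = T .
\]
% Stationarity of the Lagrangian gives each tolerance in closed form
\[
  \frac{A_i}{\tau_i}\, e^{-t_i/\tau_i}=\lambda
  \;\Longrightarrow\;
  t_i^{*}=\tau_i \ln\!\frac{A_i}{\lambda\,\tau_i},
  \qquad
  \ln\lambda=\frac{\sum_i \tau_i \ln\!\left(A_i/\tau_i\right)-T}{\sum_i \tau_i}.
\]
% Substituting back, the optimal cost needs only \lambda and the \tau_i
\[
  C^{*}=\sum_i A_i\,\frac{\lambda\,\tau_i}{A_i}=\lambda\sum_i \tau_i .
\]
```

Since the minimum cost collapses to C* = λ Σ_i τ_i, it depends only on the characteristic tolerances and the multiplier λ, which is why the theoretical minimum cost can be evaluated before any individual tolerance t_i* is computed.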


2020 ◽  
Author(s):  
Gesine Wanke ◽  
Leonardo Bergami ◽  
Frederik Zahle ◽  
David Robert Verelst

Abstract. Within this work, an existing model of a Suzlon S111 2.1 MW turbine is used to estimate potential cost savings when the conventional upwind rotor concept is changed into a downwind rotor concept. A design framework is used to generate realistic design updates for the upwind configuration as well as two design updates for the downwind configuration, including a pure material cost-out of the rotor blades and a new planform design. A full design load basis according to the standard has been used to evaluate the impact of the redesigns on the loads. A detailed cost model with load scaling is used to estimate the impact of the design changes on the turbine costs and the cost of energy. It is shown that the downwind configurations generally achieve a blade mass up to 5 % lower than that of the upwind redesign. Compared to an upwind baseline, the upwind redesign shows an estimated cost of energy reduction of 2.3 %, whereas the downwind designs achieve a maximum reduction of 1.3 %.



