Relationships between climate and macroscale area burned in the western United States

2013 ◽  
Vol 22 (7) ◽  
pp. 1003 ◽  
Author(s):  
John T. Abatzoglou ◽  
Crystal A. Kolden

Increased wildfire activity (e.g. number of starts, area burned, fire behaviour) across the western United States in recent decades has heightened interest in resolving climate–fire relationships. Macroscale climate–fire relationships were examined in forested and non-forested lands for eight Geographic Area Coordination Centers in the western United States, using area burned derived from the Monitoring Trends in Burn Severity dataset (1984–2010). Fire-specific biophysical variables including fire danger and water balance metrics were considered in addition to standard climate variables of monthly temperature, precipitation and drought indices to explicitly determine their optimal capacity to explain interannual variability in area burned. Biophysical variables tied to the depletion of fuel and soil moisture and prolonged periods of elevated fire-danger had stronger correlations to area burned than standard variables antecedent to or during the fire season, particularly in forested systems. Antecedent climate–fire relationships exhibited inter-region commonality with area burned in forested lands correlated with winter snow water equivalent and emergent drought in late spring. Area burned in non-forested lands correlated with moisture availability in the growing season preceding the fire year. Despite differences in the role of antecedent climate in preconditioning fuels, synchronous regional fire activity in forested and non-forested lands suggests that atmospheric conditions during the fire season unify fire activity and can compound or supersede antecedent climatic stressors. Collectively, climate–fire relationships viewed through the lens of biophysical variables provide a more direct link to fuel flammability and wildfire activity than standard climate variables, thereby narrowing the gap in incorporating top-down climatic factors between empirical and process-based fire models.
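The core analysis the abstract describes is a correlation of yearly climate or fire-danger metrics against yearly area burned. A minimal sketch of that interannual correlation, using entirely hypothetical numbers (the variable names and values below are illustrative, not from the study):

```python
import numpy as np

def interannual_correlation(climate_metric, area_burned):
    """Pearson correlation between a yearly climate/fire-danger
    metric and yearly area burned."""
    return float(np.corrcoef(climate_metric, area_burned)[0, 1])

# Hypothetical series: days of elevated fire danger vs. hectares burned
fire_danger_days = [10, 25, 18, 40, 33]
area_burned_ha = [120, 900, 400, 2500, 1700]
r = interannual_correlation(fire_danger_days, area_burned_ha)
print(r)  # strongly positive for these made-up data
```

The study's finding is that biophysical metrics (fuel and soil moisture depletion, sustained fire danger) yield stronger correlations of this kind than standard monthly temperature, precipitation, or drought indices.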

2018 ◽  
Vol 115 (36) ◽  
pp. E8349-E8357 ◽  
Author(s):  
Zachary A. Holden ◽  
Alan Swanson ◽  
Charles H. Luce ◽  
W. Matt Jolly ◽  
Marco Maneta ◽  
...  

Western United States wildfire increases have been generally attributed to warming temperatures, either through effects on winter snowpack or summer evaporation. However, near-surface air temperature and evaporative demand are strongly influenced by moisture availability and these interactions and their role in regulating fire activity have never been fully explored. Here we show that previously unnoted declines in summer precipitation from 1979 to 2016 across 31–45% of the forested areas in the western United States are strongly associated with burned area variations. The number of wetting rain days (WRD; days with precipitation ≥2.54 mm) during the fire season partially regulated the temperature and subsequent vapor pressure deficit (VPD) previously implicated as a primary driver of annual wildfire area burned. We use path analysis to decompose the relative influence of declining snowpack, rising temperatures, and declining precipitation on observed fire activity increases. After accounting for interactions, the net effect of WRD anomalies on wildfire area burned was more than 2.5 times greater than the net effect of VPD, and both the WRD and VPD effects were substantially greater than the influence of winter snowpack. These results suggest that precipitation during the fire season exerts the strongest control on burned area either directly through its wetting effects or indirectly through feedbacks to VPD. If these trends persist, decreases in summer precipitation and the associated summertime aridity increases would lead to more burned area across the western United States with far-reaching ecological and socioeconomic impacts.
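The abstract gives a concrete definition for its key predictor: a wetting rain day (WRD) is any day with precipitation of at least 2.54 mm. A minimal sketch of counting WRDs over a fire season, with a made-up daily record:

```python
import numpy as np

WRD_THRESHOLD_MM = 2.54  # wetting-rain-day threshold from the abstract

def count_wetting_rain_days(daily_precip_mm):
    """Count fire-season days with precipitation >= 2.54 mm."""
    p = np.asarray(daily_precip_mm, dtype=float)
    return int(np.sum(p >= WRD_THRESHOLD_MM))

# Hypothetical fire-season record (mm/day)
season = [0.0, 3.1, 2.54, 1.0, 0.2, 5.6, 0.0]
print(count_wetting_rain_days(season))  # 3
```

Seasonal WRD anomalies computed this way are the quantity whose net effect on burned area the path analysis found to exceed that of VPD by more than 2.5 times.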


2021 ◽  
Author(s):  
Joseph T Smith ◽  
Brady W Allred ◽  
Chad S Boyd ◽  
Kirk W Davies ◽  
Matthew O. Jones ◽  
...  

Wildfires are a growing management concern in western US rangelands, where invasive annual grasses have altered fire regimes and contributed to an increased incidence of catastrophic large wildfires. Fire activity in arid, non-forested regions is thought to be largely controlled by interannual variation in fuel amount, which in turn is controlled by antecedent weather. Thus, long-range forecasting of fire activity in rangelands should be feasible given annual estimates of fuel quantity. Using a 32 yr time series of spatial data, we employ machine learning algorithms to predict the relative probability of large (>400 ha) wildfire in the Great Basin based on fine-scale annual and 16-day estimates of cover and production of vegetation functional groups, weather, and multitemporal scale drought indices. We evaluate the predictive utility of these models with a leave-one-year-out cross-validation, building spatial forecasts of fire probability for each year that we compare against actual maps of large wildfires. Herbaceous vegetation aboveground biomass production, bare ground cover, and long-term drought indices were the most important predictors of fire probability. Across 32 fire seasons, >80% of the area burned in large wildfires coincided with predicted fire probabilities ≥0.5. At the scale of the Great Basin, several metrics of fire season severity were moderately to strongly correlated with average fire probability, including total area burned in large wildfires, number of large wildfires, and average and maximum fire size. Our findings show that recent years of exceptional fire activity in the Great Basin were predictable based on antecedent weather and biomass of fine fuels, and reveal a significant increasing trend in fire probability over the last three decades driven by widespread changes in fine fuel characteristics.
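The evaluation design the abstract names, leave-one-year-out cross-validation, holds out each fire season in turn, trains on the remaining years, and forecasts the held-out season. A generic sketch with a trivial stand-in model (the study itself used machine learning algorithms on vegetation, weather, and drought predictors; everything below is illustrative):

```python
import numpy as np

def leave_one_year_out(years, X, y, fit, predict):
    """Leave-one-year-out cross-validation: hold out each year,
    train on the rest, and predict the held-out year."""
    years = np.asarray(years)
    preds = np.empty(len(years), dtype=float)
    for yr in np.unique(years):
        test = years == yr
        model = fit(X[~test], y[~test])
        preds[test] = predict(model, X[test])
    return preds

# Toy stand-in model: predict the training-set mean fire probability
X = np.arange(6, dtype=float).reshape(-1, 1)
y = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
years = np.array([1, 1, 2, 2, 3, 3])
fit = lambda Xtr, ytr: ytr.mean()
predict = lambda m, Xte: np.full(len(Xte), m)
print(leave_one_year_out(years, X, y, fit, predict))
```

Because each forecast uses only data from other years, the resulting fire-probability maps can be fairly compared against the actual large-wildfire maps for the held-out season.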


2020 ◽  
Vol 12 (8) ◽  
pp. 1252 ◽  
Author(s):  
Alireza Farahmand ◽  
E. Natasha Stavros ◽  
John T. Reager ◽  
Ali Behrangi

Wildfire danger assessment is essential for the operational allocation of fire management resources; the longer the prediction lead time, the more efficiently resources can be allocated regionally. Traditional studies focus on meteorological forecasts and fire danger index models (e.g., the National Fire Danger Rating System, NFDRS) for predicting fire danger. Meteorological forecasts, however, lose accuracy beyond ~10 days; as such, there is no quantifiable method for predicting fire danger beyond 10 days. While some recent studies have statistically related hydrologic parameters and past wildfire area burned or occurrence to fire, no study has used these parameters to develop a monthly spatially distributed predictive model for the contiguous United States. Thus, the objective of this study is to introduce Fire Danger from Earth Observations (FDEO), which uses satellite data over the contiguous United States (CONUS) to enable two-month lead time prediction of wildfire danger, a sufficient lead time for planning purposes and relocating resources. In this study, we use satellite observations of land cover type, vapor pressure deficit, surface soil moisture, and the enhanced vegetation index, together with the United States Forest Service (USFS) verified and validated fire database (FPA) to develop spatially gridded probabilistic predictions of fire danger, defined as expected area burned as a deviation from "normal". The results show that the model predicts spatial patterns of fire danger with 52% overall accuracy over the 2004–2013 record, and up to 75% overall accuracy during the fire season. Overall accuracy is defined as the number of pixels with correctly predicted fire probability classes divided by the total number of studied pixels.
This overall accuracy is the first quantified result of two-month lead prediction of fire danger and demonstrates the potential utility of using diverse observational data sets for use in operational fire management resource allocation in the CONUS.
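The abstract defines overall accuracy explicitly: the number of pixels whose predicted fire-probability class matches the observed class, divided by the total number of pixels. A minimal sketch of that metric on made-up class labels:

```python
import numpy as np

def overall_accuracy(predicted_class, observed_class):
    """Fraction of pixels whose predicted fire-danger class
    matches the observed class."""
    pred = np.asarray(predicted_class)
    obs = np.asarray(observed_class)
    return float(np.mean(pred == obs))

# Hypothetical per-pixel classes (e.g., below/near/above normal burned area)
pred = np.array([0, 1, 2, 1, 0, 2])
obs = np.array([0, 1, 1, 1, 0, 0])
print(overall_accuracy(pred, obs))  # 4 of 6 pixels correct
```

Applied per month across the CONUS grid, this is the statistic behind the reported 52% record-long and up-to-75% fire-season accuracies.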


2021 ◽  
Author(s):  
Kelly Mahoney ◽  
James D. Scott ◽  
Michael Alexander ◽  
Rachel McCrary ◽  
Mimi Hughes ◽  
...  

Abstract Understanding future precipitation changes is critical for water supply and flood risk applications in the western United States. The North American COordinated Regional Downscaling EXperiment (NA-CORDEX) matrix of global and regional climate models at multiple resolutions (~ 50-km and 25-km grid spacings) is used to evaluate mean monthly precipitation, extreme daily precipitation, and snow water equivalent (SWE) over the western United States, with a sub-regional focus on California. Results indicate significant model spread in mean monthly precipitation in several key water-sensitive areas in both historical and future projections, but suggest model agreement on increasing daily extreme precipitation magnitudes, decreasing seasonal snowpack, and a shortening of the wet season in California in particular. While the beginning and end of the California cool season are projected to dry according to most models, the core of the cool season (December, January, February) shows an overall wetter projected change pattern. Daily cool-season precipitation extremes generally increase for most models, particularly in California in the mid-winter months. Finally, a marked projected decrease in future seasonal SWE is found across all models, accompanied by earlier dates of maximum seasonal SWE, and thus a shortening of the period of snow cover as well. Results are discussed in the context of how the diverse model membership and variable resolutions offered by the NA-CORDEX ensemble can be best leveraged by stakeholders faced with future water planning challenges.


2018 ◽  
Vol 31 (24) ◽  
pp. 9921-9940 ◽  
Author(s):  
N. Goldenson ◽  
L. R. Leung ◽  
C. M. Bitz ◽  
E. Blanchard-Wrigglesworth

In the coastal mountains of western North America, most extreme precipitation is associated with atmospheric rivers (ARs), narrow bands of moisture originating in the tropics. Here we quantify how interannual variability in atmospheric rivers influences snowpack in the western United States in observations and a model. We simulate the historical climate with the Model for Prediction Across Scales (MPAS) with physics from the Community Atmosphere Model, version 5 [CAM5 (MPAS-CAM5)], using prescribed sea surface temperatures. In the global variable-resolution domain, regional refinement (at ~30 km) is applied to our region of interest and upwind over the northeast Pacific. To better characterize internal variability, we conduct simulations with three ensemble members over 30 years of the historical period. In the Cascade Range, with some exceptions, winters with more atmospheric river days are associated with less snowpack. In California’s Sierra Nevada, winters with more ARs are associated with greater snowpack. The slope of the linear regression of observed snow water equivalent (SWE) on reanalysis-based AR count has the same sign as that arrived at using the model, but is statistically significant in observations only for California. In spring, internal variance plays an important role in determining whether atmospheric river days appear to be associated with greater or less snowpack. The cumulative (winter through spring) number of atmospheric river days, on the other hand, has a relationship with spring snowpack, which is consistent across ensemble members. Thus, the impact of atmospheric rivers on winter snowpack has a greater influence on spring snowpack than spring atmospheric rivers in the model for both regions and in California consistently in observations.
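The statistical test the abstract describes is a linear regression of seasonal snow water equivalent on the seasonal count of atmospheric-river days, with the sign of the slope distinguishing the Cascades (negative) from the Sierra Nevada (positive). A minimal sketch with hypothetical Sierra Nevada-like numbers:

```python
import numpy as np

def swe_ar_slope(ar_days, swe_mm):
    """Slope of the least-squares regression of winter SWE on
    seasonal atmospheric-river day count."""
    slope, _intercept = np.polyfit(np.asarray(ar_days, dtype=float),
                                   np.asarray(swe_mm, dtype=float), 1)
    return float(slope)

# Hypothetical record: winters with more AR days carry more snowpack,
# as the study found for California's Sierra Nevada
ar = [5, 8, 12, 15, 20]
swe = [300, 420, 560, 700, 900]
print(swe_ar_slope(ar, swe) > 0)  # True for these made-up data
```

The study's point is that this slope has the same sign in model ensemble members and in observations, but is statistically significant in observations only for California.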


2017 ◽  
Vol 18 (5) ◽  
pp. 1359-1374 ◽  
Author(s):  
Benjamin J. Hatchett ◽  
Susan Burak ◽  
Jonathan J. Rutz ◽  
Nina S. Oakley ◽  
Edward H. Bair ◽  
...  

Abstract The occurrence of atmospheric rivers (ARs) in association with avalanche fatalities is evaluated in the conterminous western United States between 1998 and 2014 using archived avalanche reports, atmospheric reanalysis products, an existing AR catalog, and weather station observations. AR conditions were present during or preceding 105 unique avalanche incidents resulting in 123 fatalities, thus comprising 31% of western U.S. avalanche fatalities. Coastal snow avalanche climates had the highest percentage of avalanche fatalities coinciding with AR conditions (31%–65%), followed by intermountain (25%–46%) and continental snow avalanche climates (<25%). Ratios of avalanche deaths during AR conditions to total AR days increased with distance from the coast. Frequent heavy to extreme precipitation (85th–99th percentile) during ARs favored critical snowpack loading rates with mean snow water equivalent increases of 46 mm. Results demonstrate that there exists regional consistency between snow avalanche climates, derived AR contributions to cool season precipitation, and percentages of avalanche fatalities during ARs. The intensity of water vapor transport and topographic corridors favoring inland water vapor transport may be used to help identify periods of increased avalanche hazard in intermountain and continental snow avalanche climates prior to AR landfall. Several recently developed AR forecast tools applicable to avalanche forecasting are highlighted.


2008 ◽  
Vol 9 (6) ◽  
pp. 1416-1426 ◽  
Author(s):  
Naoki Mizukami ◽  
Sanja Perica

Abstract Snow density is calculated as a ratio of snow water equivalent to snow depth. Until the late 1990s, there were no continuous simultaneous measurements of snow water equivalent and snow depth covering large areas. Because of that, spatiotemporal characteristics of snowpack density could not be well described. Since then, the Natural Resources Conservation Service (NRCS) has been collecting both types of data daily throughout the winter season at snowpack telemetry (SNOTEL) sites located in the mountainous areas of the western United States. This new dataset provided an opportunity to examine the spatiotemporal characteristics of snowpack density. The analysis of approximately seven years of data showed that at a given location and throughout the winter season, year-to-year snowpack density changes are significantly smaller than corresponding snow depth and snow water equivalent changes. As a result, reliable climatological estimates of snow density could be obtained from relatively short records. Snow density magnitudes and densification rates (i.e., rates at which snow densities change in time) were found to be location dependent. During early and midwinter, the densification rate is correlated with density. Starting in early or mid-March, however, snowpack density increases by approximately 2.0 kg m−3 day−1 regardless of location. Cluster analysis was used to obtain qualitative information on spatial patterns of snowpack density and densification rates. Four clusters were identified, each with a distinct density magnitude and densification rate. The most significant physiographic factor that discriminates between clusters was proximity to a large water body. Within individual mountain ranges, snowpack density characteristics were primarily dependent on elevation.
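The two quantities the abstract defines translate directly into arithmetic: bulk density is SWE divided by depth (SWE in mm of water equals kg m⁻², so dividing by depth in m gives kg m⁻³), and the reported late-season densification is roughly 2.0 kg m⁻³ per day. A minimal sketch with example numbers:

```python
def snow_density(swe_mm, depth_m):
    """Bulk snowpack density (kg m^-3): SWE in mm is kg m^-2,
    so dividing by depth in m yields kg m^-3."""
    return swe_mm / depth_m

def densify(rho0, days, rate=2.0):
    """Project late-season density forward at ~2.0 kg m^-3 day^-1,
    the location-independent rate reported from early/mid-March on."""
    return rho0 + rate * days

# Example: 300 mm SWE over a 1.2 m snowpack
rho = snow_density(300.0, 1.2)
print(rho, densify(rho, 10))  # 250.0 270.0
```

The abstract's practical point is that because this density varies far less year to year than SWE or depth individually, even short SNOTEL records yield reliable climatological density estimates.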


2001 ◽  
Vol 32 ◽  
pp. 135-140 ◽  
Author(s):  
K.W. Birkeland ◽  
C. J. Mock ◽  
J. J. Shinker

Abstract Avalanche forecasters can better anticipate avalanche extremes if they understand the relationships between those extremes and atmospheric circulation patterns. We investigated the relationship between extreme avalanche days and atmospheric circulation patterns at four sites in the western United States: Bridger Bowl, Montana; Jackson Hole, Wyoming; Alta, Utah; and Taos, New Mexico. For each site, we calculated a daily avalanche hazard index based on the number and size of avalanches, and we defined abnormal avalanche events as the top 10% of days with recorded avalanche activity. We assessed the influence of different variables on avalanche extremes, and found that high snow water equivalent and high snowfall correspond most closely to days of high avalanche hazard. Composite-anomaly maps of 500 hPa heights during those avalanche extremes clearly illustrate that spatial patterns of anomalous troughing prevail, though the exact position of the troughing varies between sites. These patterns can be explained by the topography of the western United States, and the low-elevation pathways for moisture that exist to the west of each of the sites. The methods developed for this research can be applied to other sites with long-term climate and avalanche databases to further our understanding of the spatial distribution of atmospheric patterns associated with extreme avalanche days.
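The abstract says the daily hazard index is "based on the number and size of avalanches" without giving the formula, so the sketch below uses a simple stand-in (summing recorded sizes, which grows with both count and magnitude) to illustrate the top-10% extreme-day selection; the index function and data are assumptions, not the study's method:

```python
def daily_hazard_index(sizes):
    """Illustrative stand-in hazard index: grows with both the number
    and the size of recorded avalanches. The exact formula used in
    the study is not given in the abstract."""
    return sum(sizes)

def extreme_days(daily_sizes, top_fraction=0.10):
    """Return the top 10% of days with recorded avalanche activity,
    ranked by hazard index."""
    active = {d: daily_hazard_index(s) for d, s in daily_sizes.items() if s}
    cutoff = sorted(active.values())[int(len(active) * (1 - top_fraction))]
    return {d for d, h in active.items() if h >= cutoff}

# Hypothetical record: ten active days with increasing avalanche sizes
daily = {f"day{i}": [i] for i in range(1, 11)}
print(extreme_days(daily))  # only the highest-index day survives the cut
```

Days selected this way are then composited against 500 hPa height anomalies to reveal the troughing patterns the study describes.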


2012 ◽  
Vol 16 (11) ◽  
pp. 1-15 ◽  
Author(s):  
Charles W. Lafon ◽  
Steven M. Quiring

Abstract Fire affects virtually all terrestrial ecosystems but occurs more commonly in some than in others. This paper investigates how climate, specifically the moisture regime, influences the flammability of different landscapes in the eastern United States. A previous study of spatial differences in fire regimes across the central Appalachian Mountains suggested that intra-annual precipitation variability influences fire occurrence more strongly than does total annual precipitation. The results presented here support that conclusion. The relationship of fire occurrence to moisture regime is also considered for the entire eastern United States. To do so, mean annual wildfire density and mean annual area burned were calculated for 34 national forests and parks representing the major vegetation and climatic conditions throughout the eastern forests. The relationship between fire activity and two climate variables was analyzed: mean annual moisture balance [precipitation P − potential evapotranspiration (PET)] and daily precipitation variability (coefficient of variability for daily precipitation). Fire activity is related to both climate variables but displays a stronger relationship with precipitation variability. The southeastern United States is particularly noteworthy for its high wildfire activity, which is associated with a warm, humid climate and a variable precipitation regime that together promote heavy fuel production and rapid drying of fuels.
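Both climate variables in this analysis have simple definitions: moisture balance is P − PET, and daily precipitation variability is the coefficient of variability (standard deviation over mean) of daily precipitation. A minimal sketch with made-up regimes showing why "bursty" rainfall scores higher:

```python
import statistics

def moisture_balance(p_mm, pet_mm):
    """Mean annual moisture balance: precipitation minus
    potential evapotranspiration (P - PET), mm."""
    return p_mm - pet_mm

def daily_precip_cv(daily_mm):
    """Coefficient of variability of daily precipitation:
    standard deviation divided by the mean."""
    return statistics.pstdev(daily_mm) / statistics.fmean(daily_mm)

# Two hypothetical regimes with the same total rainfall
even = [5.0, 5.0, 5.0, 5.0]      # steady drizzle: low variability
bursty = [0.0, 0.0, 0.0, 20.0]   # one deluge, then drought: high variability
print(daily_precip_cv(bursty) > daily_precip_cv(even))  # True
```

The bursty regime is the southeastern pattern the abstract highlights: wet enough overall to grow heavy fuels, yet variable enough for those fuels to dry rapidly between rain events.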

