Does Quantile Mapping of Simulated Precipitation Correct for Biases in Transition Probabilities and Spell Lengths?

2016 ◽  
Vol 29 (5) ◽  
pp. 1605-1615 ◽  
Author(s):  
Jan Rajczak ◽  
Sven Kotlarski ◽  
Christoph Schär

Abstract. Climate impact studies constitute the basis for the formulation of adaptation strategies. Usually such assessments apply statistically postprocessed output of climate model projections to force impact models. Increasingly, time series with daily resolution are used, which require high consistency, for instance with respect to transition probabilities (TPs) between wet and dry days and spell durations. However, both climate models and commonly applied statistical tools have considerable uncertainties and drawbacks. This paper compares the ability of 1) raw regional climate model (RCM) output, 2) bias-corrected RCM output, and 3) a conventional weather generator (WG) that has been calibrated to match observed TPs to simulate the sequence of dry, wet, and very wet days at a set of long-term weather stations across Switzerland. The study finds systematic biases in TPs and spell lengths for raw RCM output, but a substantial improvement after bias correction using the deterministic quantile mapping technique. For the region considered, bias-corrected climate model output agrees well with observations in terms of TPs as well as dry and wet spell durations. For the majority of cases (models and stations) bias-corrected climate model output is similar in skill to a simple Markov chain stochastic weather generator. There is strong evidence that bias-corrected climate model simulations capture the atmospheric event sequence more realistically than a simple WG.
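The quantities compared in this study are easy to make concrete. Below is a minimal Python sketch of estimating wet-day transition probabilities from a daily precipitation series, and of the kind of simple first-order Markov chain weather generator used as the WG baseline; the 1 mm wet-day threshold and all function names are illustrative, not the authors' code.

```python
import random

WET_THRESHOLD = 1.0  # mm/day; a common but not universal wet-day definition

def transition_probs(precip):
    """Estimate P(wet|dry) and P(wet|wet) from a daily precipitation series."""
    wet = [p >= WET_THRESHOLD for p in precip]
    n_dry_wet = n_dry = n_wet_wet = n_wet = 0
    for prev, cur in zip(wet, wet[1:]):
        if prev:
            n_wet += 1
            n_wet_wet += cur
        else:
            n_dry += 1
            n_dry_wet += cur
    return n_dry_wet / n_dry, n_wet_wet / n_wet

def markov_chain_wg(p01, p11, n_days, seed=0):
    """Generate a wet/dry day sequence from a first-order Markov chain
    with transition probabilities p01 = P(wet|dry) and p11 = P(wet|wet)."""
    rng = random.Random(seed)
    state, seq = False, []
    for _ in range(n_days):
        p = p11 if state else p01
        state = rng.random() < p
        seq.append(state)
    return seq
```

Dry and wet spell lengths follow directly from such a sequence as the lengths of runs of consecutive dry or wet days.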

2021 ◽  
Vol 60 (4) ◽  
pp. 455-475 ◽ 
Author(s):  
Maike F. Holthuijzen ◽  
Brian Beckage ◽  
Patrick J. Clemins ◽  
Dave Higdon ◽  
Jonathan M. Winter

Abstract. High-resolution, bias-corrected climate data are necessary for climate impact studies at local scales. Gridded historical data are convenient for bias correction but may contain biases resulting from interpolation. Long-term, quality-controlled station data are generally superior climatological measurements, but because the distribution of climate stations is irregular, station data are challenging to incorporate into downscaling and bias-correction approaches. Here, we compared six novel methods for constructing full-coverage, high-resolution, bias-corrected climate products using daily maximum temperature simulations from a regional climate model (RCM). Only station data were used for bias correction. We quantified performance of the six methods with the root-mean-square-error (RMSE) and Perkins skill score (PSS) and used two ANOVA models to analyze how performance varied among methods. We validated the six methods using two calibration periods of observed data (1980–89 and 1980–2014) and two testing sets of RCM data (1990–2014 and 1980–2014). RMSE for all methods varied throughout the year and was larger in cold months, whereas PSS was more consistent. Quantile-mapping bias-correction techniques substantially improved PSS, while simple linear transfer functions performed best in improving RMSE. For the 1980–89 calibration period, simple quantile-mapping techniques outperformed empirical quantile mapping (EQM) in improving PSS. When calibration and testing time periods were equivalent, EQM resulted in the largest improvements in PSS. No one method performed best in both RMSE and PSS. Our results indicate that simple quantile-mapping techniques are less prone to overfitting than EQM and are suitable for processing future climate model output, whereas EQM is ideal for bias correcting historical climate model output.
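The Perkins skill score used here measures the overlap between the binned empirical PDFs of simulated and observed values (1.0 for identical binned distributions, 0.0 for none). A minimal sketch follows; the bin count and edge handling are arbitrary choices for illustration, not those of the paper.

```python
def perkins_skill_score(obs, sim, n_bins=20):
    """Overlap of the binned empirical PDFs of two samples."""
    lo = min(min(obs), min(sim))
    hi = max(max(obs), max(sim))
    width = (hi - lo) / n_bins or 1.0  # guard against a zero-width range

    def freqs(x):
        counts = [0] * n_bins
        for v in x:
            i = min(int((v - lo) / width), n_bins - 1)  # clamp top edge
            counts[i] += 1
        return [c / len(x) for c in counts]

    # PSS = sum over bins of the smaller of the two normalized frequencies
    return sum(min(fo, fs) for fo, fs in zip(freqs(obs), freqs(sim)))
```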


2021 ◽  
Author(s):  
Michael Steininger ◽  
Daniel Abel ◽  
Katrin Ziegler ◽  
Anna Krause ◽  
Heiko Paeth ◽  
...  

Climate models are an important tool for the assessment of prospective climate change effects but they suffer from systematic and representation errors, especially for precipitation. Model output statistics (MOS) reduce these errors by fitting the model output to observational data with machine learning. In this work, we explore the feasibility and potential of deep learning with convolutional neural networks (CNNs) for MOS. We propose the CNN architecture ConvMOS specifically designed for reducing errors in climate model outputs and apply it to the climate model REMO. Our results show a considerable reduction of errors and mostly improved performance compared to three commonly used MOS approaches.


2017 ◽  
Vol 10 (12) ◽  
pp. 4563-4575 ◽  
Author(s):  
Jared Lewis ◽  
Greg E. Bodeker ◽  
Stefanie Kremser ◽  
Andrew Tait

Abstract. A method, based on climate pattern scaling, has been developed to expand a small number of projections of fields of a selected climate variable (X) into an ensemble that encapsulates a wide range of indicative model structural uncertainties. The method described in this paper is referred to as the Ensemble Projections Incorporating Climate model uncertainty (EPIC) method. Each ensemble member is constructed by adding contributions from (1) a climatology derived from observations that represents the time-invariant part of the signal; (2) a contribution from forced changes in X, where those changes can be statistically related to changes in global mean surface temperature (Tglobal); and (3) a contribution from unforced variability that is generated by a stochastic weather generator. The patterns of unforced variability are also allowed to respond to changes in Tglobal. The statistical relationships between changes in X (and its patterns of variability) and Tglobal are obtained in a training phase. Then, in an implementation phase, 190 simulations of Tglobal are generated using a simple climate model tuned to emulate 19 different global climate models (GCMs) and 10 different carbon cycle models. Using the generated Tglobal time series and the correlation between the forced changes in X and Tglobal, obtained in the training phase, the forced change in the X field can be generated many times using Monte Carlo analysis. A stochastic weather generator is used to generate realistic representations of weather which include spatial coherence. Because GCMs and regional climate models (RCMs) are less likely to correctly represent unforced variability compared to observations, the stochastic weather generator takes as input measures of variability derived from observations, but also responds to forced changes in climate in a way that is consistent with the RCM projections. 
This approach to generating a large ensemble of projections is many orders of magnitude more computationally efficient than running multiple GCM or RCM simulations. Such a large ensemble of projections permits a description of a probability density function (PDF) of future climate states rather than a small number of individual story lines within that PDF, which may not be representative of the PDF as a whole; the EPIC method largely corrects for such potential sampling biases. The method is useful for providing projections of changes in climate to users wishing to investigate the impacts and implications of climate change in a probabilistic way. A web-based tool, using the EPIC method to provide probabilistic projections of changes in daily maximum and minimum temperatures for New Zealand, has been developed and is described in this paper.
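The three additive contributions of an EPIC ensemble member can be illustrated for a single location. In this sketch, `beta` and `sigma` stand in for the regression coefficient and observation-derived variability measure obtained in the training phase; all names are hypothetical and the real method operates on spatial fields with a spatially coherent weather generator rather than independent Gaussian noise.

```python
import random

def epic_member(climatology, beta, t_global, sigma, seed=0):
    """One ensemble member = observed climatology (time-invariant part)
    + forced change statistically related to global-mean temperature
    + stochastic unforced variability."""
    rng = random.Random(seed)
    return [climatology + beta * t + rng.gauss(0.0, sigma) for t in t_global]
```

With `sigma=0` the member reduces to the pattern-scaled forced response; drawing many members with different `t_global` trajectories (from the emulated GCMs) and different seeds yields the large ensemble described above.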


2006 ◽  
Vol 134 (7) ◽  
pp. 1859-1879 ◽  
Author(s):  
Heiko Paeth ◽  
Robin Girmes ◽  
Gunter Menz ◽  
Andreas Hense

Abstract. Seasonal forecast of climate anomalies holds the prospect of improving agricultural planning and food security, particularly in the low latitudes where rainfall represents a limiting factor in agrarian production. Present-day methods are usually based on simulated precipitation as a predictor for the forthcoming rainy season. However, climate models often have low skill in predicting rainfall due to the uncertainties in physical parameterization. Here, the authors present an extended statistical model approach using three-dimensional dynamical variables from climate model experiments like temperature, geopotential height, wind components, and atmospheric moisture. A cross-validated multiple regression analysis is applied in order to fit the model output to observed seasonal precipitation during the twentieth century. This model output statistics (MOS) system is evaluated in various regions of the globe with potential predictability and compared with the conventional superensemble approach, which refers to the same variable for predictand and predictors. It is found that predictability is highest in the low latitudes. Given the remarkable spatial teleconnections in the Tropics, a large number of dynamical predictors can be determined for each region of interest. To avoid overfitting in the regression model an EOF analysis is carried out, combining predictors that are largely in-phase with each other. In addition, a bootstrap approach is used to evaluate the predictability of the statistical model. As measured by different skill scores, the MOS system reaches much higher explained variance than the superensemble approach in all considered regions. In some cases, predictability only occurs if dynamical predictor variables are taken into account, whereas the superensemble forecast fails. 
The best results are found for the tropical Pacific sector, the Nordeste region, Central America, and tropical Africa, amounting to 50% to 80% of total interannual variability. In general, the statistical relationships between the leading predictors and the predictand are physically interpretable and basically highlight the interplay between regional climate anomalies and the omnipresent role of El Niño–Southern Oscillation in the tropical climate system.
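The cross-validation safeguard used in this MOS system can be sketched for the single-predictor case: refit the regression with each year left out, predict the held-out year, and report cross-validated explained variance. The function name and this simplification to one predictor are illustrative only.

```python
def loo_skill(x, y):
    """Leave-one-out cross-validated explained variance of a simple
    linear regression of predictand y on predictor x."""
    n = len(x)
    errs = []
    for k in range(n):
        # refit the regression without year k
        xs = [x[i] for i in range(n) if i != k]
        ys = [y[i] for i in range(n) if i != k]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        b1 = sxy / sxx
        b0 = my - b1 * mx
        # squared error of the out-of-sample prediction for year k
        errs.append((y[k] - (b0 + b1 * x[k])) ** 2)
    var_y = sum((v - sum(y) / n) ** 2 for v in y) / n
    return 1.0 - (sum(errs) / n) / var_y
```

A skill near 1 indicates the fitted relationship generalizes to withheld years; a skill near or below 0 is the overfitting signature the EOF prefiltering described above is meant to prevent.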


2017 ◽  
Author(s):  
Jared Lewis ◽  
Greg E. Bodeker ◽  
Andrew Tait ◽  
Stefanie Kremser

Abstract. A method, based on climate pattern-scaling, has been developed to expand a small number of projections of fields of a selected climate variable (X) into an ensemble that encapsulates a wide range of model structural uncertainties. The method described in this paper is referred to as the Ensemble Projections Incorporating Climate model uncertainty (EPIC) method. Each ensemble member is constructed by adding contributions from (1) a climatology derived from observations that represents the time invariant part of the signal, (2) a contribution from forced changes in X where those changes can be statistically related to changes in global mean surface temperature (Tglobal), and (3) a contribution from unforced variability that is generated by a stochastic weather generator. The patterns of unforced variability are also allowed to respond to changes in Tglobal. The statistical relationships between changes in X (and its patterns of variability) with Tglobal are obtained in a "training" phase. Then, in an "implementation" phase, 190 simulations of Tglobal are generated using a simple climate model tuned to emulate 19 different Global Climate Models (GCMs) and 10 different carbon cycle models. Using the generated Tglobal time series and the correlation between the forced changes in X and Tglobal, obtained in the "training" phase, the forced change in the X field can be generated many times using Monte Carlo analysis. A stochastic weather generator model is used to generate realistic representations of weather which include spatial coherence. Because GCMs and Regional Climate Models (RCMs) are less likely to correctly represent unforced variability compared to observations, the stochastic weather generator takes as input measures of variability derived from observations, but also responds to forced changes in climate in a way that is consistent with the RCM projections. 
This approach to generating a large ensemble of projections is many orders of magnitude more computationally efficient than running multiple GCM or RCM simulations. Such a large ensemble of projections permits a description of a Probability Density Function (PDF) of future climate states rather than a small number of individual story lines within that PDF which may not be representative of the PDF as a whole; the EPIC method corrects for such potential sampling biases. The method is useful for providing projections of changes in climate to users wishing to investigate the impacts and implications of climate change in a probabilistic way. A web-based tool, using the EPIC method to provide probabilistic projections of changes in daily maximum and minimum temperatures for New Zealand, has been developed and is described in this paper.


2016 ◽  
Vol 20 (2) ◽  
pp. 685-696 ◽  
Author(s):  
E. P. Maurer ◽  
D. L. Ficklin ◽  
W. Wang

Abstract. Statistical downscaling is a commonly used technique for translating large-scale climate model output to a scale appropriate for assessing impacts. To ensure downscaled meteorology can be used in climate impact studies, downscaling must correct biases in the large-scale signal. A simple and generally effective method for accommodating systematic biases in large-scale model output is quantile mapping, which has been applied to many variables and shown to reduce biases on average, even in the presence of non-stationarity. Quantile-mapping bias correction has been applied at spatial scales ranging from hundreds of kilometers to individual points, such as weather station locations. Since water resources and other models used to simulate climate impacts are sensitive to biases in input meteorology, there is a motivation to apply bias correction at a scale fine enough that the downscaled data closely resemble historically observed data, though past work has identified undesirable consequences to applying quantile mapping at too fine a scale. This study explores the role of the spatial scale at which the quantile-mapping bias correction is applied, in the context of estimating high and low daily streamflows across the western United States. We vary the spatial scale at which quantile-mapping bias correction is performed from 2° (∼200 km) to 1/8° (∼12 km) within a statistical downscaling procedure, and use the downscaled daily precipitation and temperature to drive a hydrology model. We find that little additional benefit is obtained, and some skill is degraded, when using quantile mapping at scales finer than approximately 0.5° (∼50 km). This can provide guidance to those applying the quantile-mapping bias correction method for hydrologic impacts analysis.
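The empirical quantile-mapping transfer function at the center of this study can be sketched in a few lines: find the quantile of a model value in the model's historical distribution, then return the observed value at that same quantile. The index handling here is a simplification of the actual downscaling procedure.

```python
def quantile_map(value, model_hist, obs_hist):
    """Empirical quantile mapping of a single model value onto the
    observed distribution."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    q = sum(v <= value for v in m) / len(m)       # empirical CDF of the model
    idx = min(round(q * (len(o) - 1)), len(o) - 1)  # same quantile in the obs
    return o[idx]
```

Applied value by value, this removes a systematic offset between model and observed climatologies quantile by quantile, which is why it corrects the full distribution rather than just the mean.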


2013 ◽  
Vol 94 (5) ◽  
pp. 623-627 ◽  
Author(s):  
Eric Guilyardi ◽  
V. Balaji ◽  
Bryan Lawrence ◽  
Sarah Callaghan ◽  
Cecelia Deluca ◽  
...  

The results of climate models are of increasing and widespread importance. No longer is climate model output of sole interest to climate scientists and researchers in the climate change impacts and adaptation fields. Now nonspecialists such as government officials, policy makers, and the general public all have an increasing need to access climate model output and understand its implications. For this host of users, accurate and complete metadata (i.e., information about how and why the data were produced) is required to document the climate modeling results. Here we describe a pilot community initiative to collect and make available documentation of climate models and their simulations. In an initial application, a metadata repository is being established to provide information of this kind for a major internationally coordinated modeling activity known as CMIP5 (Coupled Model Intercomparison Project, Phase 5). It is expected that for a wide range of stakeholders, this and similar community-managed metadata repositories will spur development of analysis tools that facilitate discovery and exploitation of Earth system simulations.


2015 ◽  
Vol 12 (10) ◽  
pp. 10893-10920 ◽  
Author(s):  
E. P. Maurer ◽  
D. L. Ficklin ◽  
W. Wang

Abstract. Statistical downscaling is a commonly used technique for translating large-scale climate model output to a scale appropriate for assessing impacts. To ensure downscaled meteorology can be used in climate impact studies, downscaling must correct biases in the large-scale signal. A simple and generally effective method for accommodating systematic biases in large-scale model output is quantile mapping, which has been applied to many variables and shown to reduce biases on average, even in the presence of non-stationarity. Quantile mapping bias correction has been applied at spatial scales ranging from areas of hundreds of kilometers to individual points, such as weather station locations. Since water resources and other models used to simulate climate impacts are sensitive to biases in input meteorology, there is a motivation to apply bias correction at a scale fine enough that the downscaled data closely resembles historically observed data, though past work has identified undesirable consequences to applying quantile mapping at too fine a scale. This study explores the role of the spatial scale at which the quantile-mapping bias correction is applied, in the context of estimating high and low daily streamflows across the Western United States. We vary the spatial scale at which quantile mapping bias correction is performed from 2° (∼ 200 km) to 1/8° (∼ 12 km) within a statistical downscaling procedure, and use the downscaled daily precipitation and temperature to drive a hydrology model. We find that little additional benefit is obtained, and some skill is degraded, when using quantile mapping at scales finer than approximately 0.5° (∼ 50 km). This can provide guidance to those applying the quantile mapping bias correction method for hydrologic impacts analysis.


2016 ◽  
Vol 29 (19) ◽  
pp. 7045-7064 ◽  
Author(s):  
Alex J. Cannon

Abstract. Univariate bias correction algorithms, such as quantile mapping, are used to address systematic biases in climate model output. Intervariable dependence structure (e.g., between different quantities like temperature and precipitation or between sites) is typically ignored, which can have an impact on subsequent calculations that depend on multiple climate variables. A novel multivariate bias correction (MBC) algorithm is introduced as a multidimensional analog of univariate quantile mapping. Two variants are presented. MBCp and MBCr respectively correct Pearson correlation and Spearman rank correlation dependence structure, with marginal distributions in both constrained to match observed distributions via quantile mapping. MBC is demonstrated on two case studies: 1) bivariate bias correction of monthly temperature and precipitation output from a large ensemble of climate models and 2) multivariate correction of vertical humidity and wind profiles, including subsequent calculation of vertically integrated water vapor transport and detection of atmospheric rivers. The energy distance is recommended as an omnibus measure of performance for model selection. As expected, substantial improvements in performance relative to quantile mapping are found in each case. For reference, characteristics of the MBC algorithm are compared against existing bivariate and multivariate bias correction techniques. MBC performs competitively and fills a role as a flexible, general purpose multivariate bias correction algorithm.
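The energy distance recommended here as an omnibus measure compares entire multivariate distributions rather than individual marginals: it is 2E‖X−Y‖ − E‖X−X′‖ − E‖Y−Y′‖ and vanishes only when the distributions coincide. A plain V-statistic sketch (no bias correction of the statistic itself):

```python
import math

def energy_distance(xs, ys):
    """Squared energy distance between two samples of points (tuples)."""
    def mean_dist(a, b):
        # average pairwise Euclidean distance between the two samples
        return sum(math.dist(p, q) for p in a for q in b) / (len(a) * len(b))
    return 2.0 * mean_dist(xs, ys) - mean_dist(xs, xs) - mean_dist(ys, ys)
```

For model selection, a smaller energy distance between bias-corrected output and observations indicates a better match of the full joint distribution, including the intervariable dependence that univariate quantile mapping ignores.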

