Stochastic Model Output Statistics for Bias Correcting and Downscaling Precipitation Including Extremes

2014, Vol. 27 (18), pp. 6940–6959
Author(s): Geraldine Wong, Douglas Maraun, Mathieu Vrac, Martin Widmann, Jonathan M. Eden, et al.

Abstract Precipitation is highly variable in space and time; hence, rain gauge time series generally exhibit additional random small-scale variability compared to area averages. Therefore, differences between daily precipitation statistics simulated by climate models and gauge observations are generally not only caused by model biases, but also by the corresponding scale gap. Classical bias correction methods, in general, cannot bridge this gap; they do not account for small-scale random variability and may produce artifacts. Here, stochastic model output statistics is proposed as a bias correction framework to explicitly account for random small-scale variability. Daily precipitation simulated by a regional climate model (RCM) is employed to predict the probability distribution of local precipitation. The pairwise correspondence between predictor and predictand required for calibration is ensured by driving the RCM with perfect boundary conditions. Wet day probabilities are described by a logistic regression, and precipitation intensities are described by a mixture model consisting of a gamma distribution for moderate precipitation and a generalized Pareto distribution for extremes. The dependence of the model parameters on simulated precipitation is modeled by a vector generalized linear model. The proposed model effectively corrects systematic biases and correctly represents local-scale random variability for most gauges. Additionally, a simplified model is considered that disregards the separate tail model. This computationally efficient model proves to be a feasible alternative for precipitation up to moderately extreme intensities. The approach sets a new framework for bias correction that combines the advantages of weather generators and RCMs.
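A minimal sketch of the two-part structure this abstract describes: a logistic occurrence model plus a gamma body and generalized Pareto tail for wet-day intensities. In the paper the distribution parameters themselves depend on simulated precipitation through a vector generalized linear model; here they are held fixed for brevity, and the synthetic data, tail threshold, and mixture weight are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a stochastic MOS precipitation model: logistic occurrence,
# gamma bulk, generalized Pareto tail. Illustrative assumptions throughout.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic calibration data: RCM precipitation (predictor) and gauge
# precipitation (predictand) on matched days.
rcm = rng.gamma(0.6, 4.0, size=5000)            # simulated daily precip (mm)
gauge = np.where(rng.random(5000) < 1 / (1 + np.exp(1.0 - 0.5 * rcm)),
                 rng.gamma(0.7, 5.0, size=5000), 0.0)

# 1) Occurrence: wet-day probability as a logistic function of RCM precip.
occ = LogisticRegression().fit(rcm.reshape(-1, 1), (gauge > 0).astype(int))

# 2) Intensity: gamma for the bulk, generalized Pareto above a threshold.
wet = gauge[gauge > 0]
u = np.quantile(wet, 0.95)                      # tail threshold (assumed)
g_shape, _, g_scale = stats.gamma.fit(wet[wet <= u], floc=0)
gp_shape, _, gp_scale = stats.genpareto.fit(wet[wet > u] - u, floc=0)

def simulate_local(rcm_day):
    """Draw one stochastic local value given a simulated RCM value."""
    p_wet = occ.predict_proba([[rcm_day]])[0, 1]
    if rng.random() >= p_wet:
        return 0.0
    if rng.random() < 0.95:                     # bulk vs tail mixture weight (assumed)
        return min(stats.gamma.rvs(g_shape, scale=g_scale, random_state=rng), u)
    return u + stats.genpareto.rvs(gp_shape, scale=gp_scale, random_state=rng)

print([round(simulate_local(10.0), 2) for _ in range(5)])
```

Because the output is a distribution rather than a single corrected value, repeated draws for the same simulated day reproduce the local-scale random variability that deterministic corrections cannot.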

2013, Vol. 17 (11), pp. 4481–4502
Author(s): S. Hwang, W. D. Graham

Abstract. There are a number of statistical techniques that downscale coarse climate information from general circulation models (GCMs). However, many of them do not reproduce the small-scale spatial variability of precipitation exhibited by the observed meteorological data, which is an important factor for predicting hydrologic response to climatic forcing. In this study a new downscaling technique (Bias-Correction and Stochastic Analog method; BCSA) was developed to produce stochastic realizations of bias-corrected daily GCM precipitation fields that preserve both the spatial autocorrelation structure of observed daily precipitation sequences and the observed temporal frequency distribution of daily rainfall over space. We used the BCSA method to downscale 4 different daily GCM precipitation predictions from 1961 to 1999 over the state of Florida, and compared the skill of the method to results obtained with the commonly used bias-correction and spatial disaggregation (BCSD) approach, a modified version of BCSD which reverses the order of spatial disaggregation and bias-correction (SDBC), and the bias-correction and constructed analog (BCCA) method. Spatial and temporal statistics, transition probabilities, wet/dry spell lengths, spatial correlation indices, and variograms for wet (June through September) and dry (October through May) seasons were calculated for each method. Results showed that (1) BCCA underestimated mean daily precipitation for both wet and dry seasons while the BCSD, SDBC and BCSA methods accurately reproduced these characteristics, (2) the BCSD and BCCA methods underestimated temporal variability of daily precipitation and thus did not reproduce daily precipitation standard deviations, transition probabilities or wet/dry spell lengths as well as the SDBC and BCSA methods, and (3) the BCSD, BCCA and SDBC methods underestimated spatial variability in daily precipitation resulting in underprediction of spatial variance and overprediction of spatial correlation, whereas the new stochastic technique (BCSA) replicated observed spatial statistics for both the wet and dry seasons. This study underscores the need to carefully select a downscaling method that reproduces all precipitation characteristics important for the hydrologic system under consideration if local hydrologic impacts of climate variability and change are going to be reasonably predicted. For low-relief, rainfall-dominated watersheds, where reproducing small-scale spatiotemporal precipitation variability is important, the BCSA method is recommended for use over the BCSD, BCCA, or SDBC methods.
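Among the evaluation statistics listed above, the empirical variogram is compact enough to sketch: bin the half squared differences of a daily precipitation field by pairwise station distance. The station coordinates, values, and bin edges below are synthetic assumptions, not data from the study.

```python
# Binned empirical variogram of one day's precipitation field, one of the
# spatial statistics used to compare downscaling methods. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(200, 2))        # station coordinates (km)
z = rng.gamma(0.8, 6.0, size=200)              # daily precipitation (mm)

# Pairwise distances and semivariances over unique station pairs.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
sq = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)

# Average semivariance within distance bins.
bins = np.linspace(0, 100, 11)
idx = np.digitize(d[iu], bins)
gamma_h = [sq[iu][idx == k].mean() for k in range(1, len(bins))]
print(np.round(gamma_h, 2))
```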


2021, Vol. 60 (4), pp. 455–475
Author(s): Maike F. Holthuijzen, Brian Beckage, Patrick J. Clemins, Dave Higdon, Jonathan M. Winter

Abstract. High-resolution, bias-corrected climate data are necessary for climate impact studies at local scales. Gridded historical data are convenient for bias correction but may contain biases resulting from interpolation. Long-term, quality-controlled station data are generally superior climatological measurements, but because the distribution of climate stations is irregular, station data are challenging to incorporate into downscaling and bias-correction approaches. Here, we compared six novel methods for constructing full-coverage, high-resolution, bias-corrected climate products using daily maximum temperature simulations from a regional climate model (RCM). Only station data were used for bias correction. We quantified performance of the six methods with the root-mean-square error (RMSE) and Perkins skill score (PSS) and used two ANOVA models to analyze how performance varied among methods. We validated the six methods using two calibration periods of observed data (1980–89 and 1980–2014) and two testing sets of RCM data (1990–2014 and 1980–2014). RMSE for all methods varied throughout the year and was larger in cold months, whereas PSS was more consistent. Quantile-mapping bias-correction techniques substantially improved PSS, while simple linear transfer functions performed best in improving RMSE. For the 1980–89 calibration period, simple quantile-mapping techniques outperformed empirical quantile mapping (EQM) in improving PSS. When calibration and testing time periods were equivalent, EQM resulted in the largest improvements in PSS. No one method performed best in both RMSE and PSS. Our results indicate that simple quantile-mapping techniques are less prone to overfitting than EQM and are suitable for processing future climate model output, whereas EQM is ideal for bias correcting historical climate model output.
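For reference, the two verification scores are straightforward to compute: RMSE over paired values, and PSS as the summed overlap of the modeled and observed empirical frequency distributions. The bin width and synthetic data in the sketch below are assumptions.

```python
# RMSE and Perkins skill score (PSS). PSS sums, over common bins, the
# minimum of the two relative frequencies; 1 means identical PDFs.
import numpy as np

def rmse(model, obs):
    return float(np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2)))

def perkins_skill_score(model, obs, bin_width=1.0):
    lo = min(np.min(model), np.min(obs))
    hi = max(np.max(model), np.max(obs))
    bins = np.arange(lo, hi + bin_width, bin_width)
    f_m, _ = np.histogram(model, bins=bins)
    f_o, _ = np.histogram(obs, bins=bins)
    # Normalize counts to relative frequencies, then sum the overlap.
    return float(np.minimum(f_m / len(model), f_o / len(obs)).sum())

rng = np.random.default_rng(2)
obs = rng.normal(20, 8, size=10000)            # observed daily Tmax (deg C)
mod = rng.normal(22, 9, size=10000)            # simulated daily Tmax (deg C)
print(rmse(mod, obs), perkins_skill_score(mod, obs))
```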


2021
Author(s): Michael Steininger, Daniel Abel, Katrin Ziegler, Anna Krause, Heiko Paeth, et al.

Climate models are an important tool for the assessment of prospective climate change effects, but they suffer from systematic and representation errors, especially for precipitation. Model output statistics (MOS) reduce these errors by fitting the model output to observational data with machine learning. In this work, we explore the feasibility and potential of deep learning with convolutional neural networks (CNNs) for MOS. We propose the CNN architecture ConvMOS, specifically designed for reducing errors in climate model outputs, and apply it to the climate model REMO. Our results show a considerable reduction of errors and mostly improved performance compared to three commonly used MOS approaches.
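The ConvMOS architecture itself is not specified in this abstract, but the general idea of CNN-based MOS can be sketched as a gridded regression that learns an additive correction to the raw model field. The layer widths, kernel sizes, single-predictor setup, and training data below are illustrative assumptions in PyTorch, not the authors' design.

```python
# Generic CNN-based MOS sketch: predict a gridded additive correction
# from a climate model field toward observations. Not ConvMOS itself.
import torch
import torch.nn as nn

class SimpleMOSNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: raw field plus learned correction.
        return x + self.net(x)

model = SimpleMOSNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic training pair: simulated precip fields and "observed" targets.
sim = torch.rand(8, 1, 64, 64)
obs = sim * 1.2 + 0.1 * torch.rand(8, 1, 64, 64)

for _ in range(5):                              # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(sim), obs)
    loss.backward()
    opt.step()
print(float(loss))
```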


2018, Vol. 31 (16), pp. 6591–6610
Author(s): Martin Aleksandrov Ivanov, Jürg Luterbacher, Sven Kotlarski

Climate change impact research and risk assessment require accurate estimates of the climate change signal (CCS). Raw climate model data include systematic biases that affect the CCS of high-impact variables such as daily precipitation and wind speed. This paper presents a novel, general, and extensible analytical theory of the effect of these biases on the CCS of the distribution mean and quantiles. The theory reveals that misrepresented model intensities and probabilities of nonzero (positive) events have the potential to distort raw model CCS estimates. We test the analytical description in a challenging application of bias correction and downscaling to daily precipitation over alpine terrain, where the output of 15 regional climate models (RCMs) is downscaled to local weather stations. The theoretically predicted CCS modification closely approximates the modification produced by the bias correction method, even for the station–RCM combinations with the largest absolute modifications. These results demonstrate that the CCS modification by bias correction is a direct consequence of removing model biases. Therefore, provided that application of intensity-dependent bias correction is scientifically appropriate, the CCS modification should be a desirable effect. The analytical theory can be used as a tool to 1) detect model biases with high potential to distort the CCS and 2) efficiently generate novel, improved CCS datasets. The latter are highly relevant for the development of appropriate climate change adaptation, mitigation, and resilience strategies. Future research needs to focus on developing process-based bias corrections that depend on simulated intensities rather than preserving the raw model CCS.
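The effect the theory describes can be reproduced numerically: the CCS of a quantile computed from raw model output differs from the CCS computed after quantile-mapping bias correction calibrated on the control period. The gamma distributions below are synthetic stand-ins, and the empirical quantile mapping is a generic implementation, not the paper's analytical machinery.

```python
# Numerical illustration: raw-model CCS of a quantile vs the CCS after
# empirical quantile mapping calibrated on the control period.
import numpy as np

rng = np.random.default_rng(3)
obs_ctl = rng.gamma(0.8, 5.0, 20000)           # observed, control period
mod_ctl = rng.gamma(0.6, 8.0, 20000)           # model, control period (biased)
mod_scn = rng.gamma(0.6, 9.5, 20000)           # model, scenario period

def eqm(x, calib_mod, calib_obs):
    """Map x through the calibration-period model CDF to observed quantiles."""
    p = np.searchsorted(np.sort(calib_mod), x) / len(calib_mod)
    return np.quantile(calib_obs, np.clip(p, 0.0, 1.0))

q = 0.95
ccs_raw = np.quantile(mod_scn, q) - np.quantile(mod_ctl, q)
ccs_bc = (np.quantile(eqm(mod_scn, mod_ctl, obs_ctl), q)
          - np.quantile(eqm(mod_ctl, mod_ctl, obs_ctl), q))
print(round(ccs_raw, 2), round(ccs_bc, 2))     # intensity-dependent correction modifies the CCS
```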


2016, Vol. 29 (5), pp. 1605–1615
Author(s): Jan Rajczak, Sven Kotlarski, Christoph Schär

Abstract Climate impact studies constitute the basis for the formulation of adaptation strategies. Usually such assessments apply statistically postprocessed output of climate model projections to force impact models. Increasingly, time series with daily resolution are used, which require high consistency, for instance with respect to transition probabilities (TPs) between wet and dry days and spell durations. However, both climate models and commonly applied statistical tools have considerable uncertainties and drawbacks. This paper compares the ability of 1) raw regional climate model (RCM) output, 2) bias-corrected RCM output, and 3) a conventional weather generator (WG) that has been calibrated to match observed TPs to simulate the sequence of dry, wet, and very wet days at a set of long-term weather stations across Switzerland. The study finds systematic biases in TPs and spell lengths for raw RCM output, but a substantial improvement after bias correction using the deterministic quantile mapping technique. For the region considered, bias-corrected climate model output agrees well with observations in terms of TPs as well as dry and wet spell durations. For the majority of cases (models and stations) bias-corrected climate model output is similar in skill to a simple Markov chain stochastic weather generator. There is strong evidence that bias-corrected climate model simulations capture the atmospheric event sequence more realistically than a simple WG.
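The weather-generator baseline in this comparison rests on first-order Markov chain occurrence modeling, which is compact enough to sketch: estimate the wet/dry transition probabilities from a daily series, then simulate a sequence that preserves them. The 1 mm wet-day threshold and placeholder data are assumptions.

```python
# First-order Markov chain occurrence model, the core of a simple WG:
# estimate transition probabilities, then simulate a wet/dry sequence.
import numpy as np

rng = np.random.default_rng(4)
precip = rng.gamma(0.5, 4.0, size=3650)        # placeholder daily series (mm)
wet = precip >= 1.0                            # wet-day threshold (assumed)

# Transition probabilities P(wet tomorrow | state today).
p_ww = np.mean(wet[1:][wet[:-1]])
p_dw = np.mean(wet[1:][~wet[:-1]])
print(f"P(wet|wet)={p_ww:.2f}  P(wet|dry)={p_dw:.2f}")

# Simulate an occurrence sequence with those TPs.
sim = np.empty(3650, dtype=bool)
sim[0] = wet[0]
for t in range(1, len(sim)):
    sim[t] = rng.random() < (p_ww if sim[t - 1] else p_dw)
print("observed wet fraction:", wet.mean().round(2),
      " simulated:", sim.mean().round(2))
```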


2011, Vol. 24 (3), pp. 867–880
Author(s): Jouni Räisänen, Jussi S. Ylhäisi

Abstract The general decrease in the quality of climate model output with decreasing scale suggests a need for spatial smoothing to suppress the most unreliable small-scale features. However, even if correctly simulated, a large-scale average retained by the smoothing may not be representative of the local conditions, which are of primary interest in many impact studies. Here, the authors study this trade-off using simulations of temperature and precipitation by 24 climate models within the Third Coupled Model Intercomparison Project, to find the scale of smoothing at which the mean-square difference between smoothed model output and gridbox-scale reality is minimized. This is done for present-day time mean climate, recent temperature trends, and projections of future climate change, using cross validation between the models for the latter. The optimal scale depends strongly on the number of models used, being much smaller for multimodel means than for individual model simulations. It also depends on the variable considered and, in the case of climate change projections, the time horizon. For multimodel-mean climate change projections for the late twenty-first century, only very slight smoothing appears to be beneficial, and the resulting potential improvement is negligible for practical purposes. The use of smoothing as a means to improve the sampling for probabilistic climate change projections is also briefly explored.
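The trade-off can be illustrated with a toy experiment: add small-scale error to a smooth "true" field and measure the mean-square difference against the truth as a function of Gaussian smoothing scale. The fields, noise level, and smoother below are assumptions, not the paper's CMIP3 setup.

```python
# Smoothing-scale trade-off: moderate smoothing filters unreliable
# small-scale error, heavy smoothing loses local representativeness.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
truth = gaussian_filter(rng.normal(size=(128, 128)), sigma=4)  # smooth "reality"
model = truth + 0.8 * rng.normal(size=(128, 128))              # truth + small-scale error

for sigma in [0, 1, 2, 4, 8, 16]:
    smoothed = gaussian_filter(model, sigma=sigma) if sigma else model
    mse = np.mean((smoothed - truth) ** 2)
    print(f"smoothing scale {sigma:2d}: MSE = {mse:.3f}")
```

Under these assumptions the MSE first falls and then rises again with increasing scale, mirroring the existence of an optimal smoothing scale reported above.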


2013, Vol. 26 (6), pp. 2137–2143
Author(s): Douglas Maraun

Abstract Quantile mapping is routinely applied to correct biases of regional climate model simulations relative to observational data. If the observations are of similar resolution to the regional climate model, quantile mapping is a feasible approach. However, if the observations are of much higher resolution, quantile mapping also attempts to bridge this scale mismatch. Here, it is shown for daily precipitation that such quantile mapping–based downscaling is not feasible and introduces problems similar to those caused by inflation in perfect prognosis (PP) downscaling: the spatial and temporal structure of the corrected time series is misrepresented, the drizzle effect for area means is overcorrected, area-mean extremes are overestimated, and trends are affected. To overcome these problems, stochastic bias correction is required.
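The core objection is that deterministic quantile mapping is a monotone transform: it can match the local marginal distribution but cannot add local-scale variability. A small demonstration with synthetic data (the area-mean construction below is an assumption):

```python
# Quantile mapping an area-mean series onto a gauge distribution matches
# the gauge's marginal but leaves the day-to-day rank sequence unchanged.
import numpy as np

rng = np.random.default_rng(6)
gauge = rng.gamma(0.6, 6.0, 10000)               # local, noisy series
area = gauge * 0.5 + rng.gamma(1.5, 1.0, 10000)  # smoother area-mean stand-in

# Quantile-map the area mean onto the gauge distribution via ranks.
ranks = np.argsort(np.argsort(area)) / (len(area) - 1)
corrected = np.quantile(gauge, ranks)

print("gauge 99th pct:", round(np.quantile(gauge, 0.99), 1),
      " corrected 99th pct:", round(np.quantile(corrected, 0.99), 1))
# Rank correlation with the area mean stays exactly 1: no local-scale
# random variability has been added, only the marginal was adjusted.
rank_c = np.argsort(np.argsort(corrected))
rank_a = np.argsort(np.argsort(area))
print("rank corr(corrected, area) =", round(np.corrcoef(rank_c, rank_a)[0, 1], 3))
```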


2014, Vol. 27 (1), pp. 312–324
Author(s): Jonathan M. Eden, Martin Widmann

Abstract Producing reliable estimates of changes in precipitation at local and regional scales remains an important challenge in climate science. Statistical downscaling methods are often utilized to bridge the gap between the coarse resolution of general circulation models (GCMs) and the higher resolutions at which information is required by end users. As the skill of GCM precipitation, particularly in simulating temporal variability, is not fully understood, statistical downscaling typically adopts a perfect prognosis (PP) approach in which high-resolution precipitation projections are based on real-world statistical relationships between large-scale atmospheric predictors and local-scale precipitation. Using a nudged simulation of the ECHAM5 GCM, in which the large-scale weather states are forced toward observations of large-scale circulation and temperature for the period 1958–2001, previous work has shown ECHAM5 skill in simulating temporal variability of precipitation to be high in many parts of the world. Here, the same nudged simulation is used in an alternative downscaling approach, based on model output statistics (MOS), in which statistical corrections are derived for simulated precipitation. Cross-validated MOS corrections based on maximum covariance analysis (MCA) and principal component regression (PCR), in addition to a simple local scaling, are shown to perform strongly throughout much of the extratropics. Correlation between downscaled and observed monthly-mean precipitation is as high as 0.8–0.9 in many parts of Europe, North America, and Australia. For these regions, MOS clearly outperforms PP methods that use temperature and circulation as predictors. The strong performance of MOS makes such an approach to downscaling attractive and potentially applicable to climate change simulations.
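Of the MOS corrections compared, principal component regression is simple to sketch: regress station precipitation on the leading principal components of the simulated precipitation field. The grid size, number of components, and synthetic data below are assumptions, not the paper's setup.

```python
# MOS by principal component regression: PCA on the simulated grid,
# then linear regression to a station series, validated on held-out years.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_months, n_grid = 480, 300
sim_field = rng.gamma(2.0, 3.0, size=(n_months, n_grid))  # simulated precip grid
station = sim_field[:, :5].mean(axis=1) + rng.normal(0, 1, n_months)

pcr = make_pipeline(PCA(n_components=10), LinearRegression())
pcr.fit(sim_field[:360], station[:360])                   # calibrate
pred = pcr.predict(sim_field[360:])                       # downscale held-out months
corr = np.corrcoef(pred, station[360:])[0, 1]
print(f"held-out correlation: {corr:.2f}")
```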


2011, Vol. 12 (4), pp. 556–578
Author(s): Stefan Hagemann, Cui Chen, Jan O. Haerter, Jens Heinke, Dieter Gerten, et al.

Abstract Future climate model scenarios depend crucially on the models’ adequate representation of the hydrological cycle. Within the EU integrated project Water and Global Change (WATCH), special care is taken to use state-of-the-art climate model output for impact assessments with a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, given the systematic errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, a statistical bias correction methodology has been developed for correcting climate model output to produce long-term time series with a statistical intensity distribution close to that of the observations. As observations, global reanalyzed daily data of precipitation and temperature obtained in the WATCH project were used. Daily time series from three general circulation models (GCMs) were bias corrected: ECHAM5/Max Planck Institute Ocean Model (MPI-OM); the Centre National de Recherches Météorologiques Coupled GCM, version 3 (CNRM-CM3); and LMDZ-4, the atmospheric component of the L’Institut Pierre-Simon Laplace Coupled Model, version 4 (IPSL CM4). After validation of the bias-corrected data, the original and the bias-corrected GCM data were used to force two global hydrology models (GHMs): 1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM), consisting of the simplified land surface (SL) scheme and the hydrological discharge (HD) model, and 2) the dynamic global vegetation model LPJmL. The impact of the bias correction on the projected hydrological changes is analyzed, and the simulation results of the two GHMs are compared. Here, the projected changes in 2071–2100 are considered relative to 1961–90. It is shown for both GHMs that the use of bias-corrected GCM data leads to an improved simulation of river runoff for most catchments. However, it is also found that the bias correction has an impact on the climate change signal for specific locations and months, thereby identifying another level of uncertainty in the modeling chain from the GCM to the simulated changes calculated by the GHMs. This uncertainty may be of the same order of magnitude as the uncertainty related to the choice of the GCM or GHM. Note that this uncertainty is primarily attached to the GCM and only becomes obvious by applying the statistical bias correction methodology.
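A parametric intensity correction in this spirit can be sketched by fitting gamma distributions to observed and simulated wet-day precipitation and mapping simulated values through the two CDFs. The actual WATCH methodology differs in detail, and the data below are synthetic assumptions.

```python
# Parametric (gamma-to-gamma) intensity correction: map simulated wet-day
# values through the fitted simulated CDF and the inverse observed CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
obs = rng.gamma(0.8, 5.0, 10000)               # observed wet-day precip (mm)
sim = rng.gamma(0.6, 9.0, 10000)               # simulated wet-day precip (mm)

o_shape, _, o_scale = stats.gamma.fit(obs, floc=0)
s_shape, _, s_scale = stats.gamma.fit(sim, floc=0)

def correct(x):
    """Map simulated intensities through the two fitted gamma CDFs."""
    p = stats.gamma.cdf(x, s_shape, scale=s_scale)
    return stats.gamma.ppf(p, o_shape, scale=o_scale)

print("simulated mean:", sim.mean().round(2),
      " corrected mean:", correct(sim).mean().round(2),
      " observed mean:", obs.mean().round(2))
```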

