Statistical downscaling with the downscaleR package: Contribution to the VALUE intercomparison experiment


2020 ◽  
Vol 13 (3) ◽  
pp. 1711-1735 ◽  
Author(s):  
Joaquín Bedia ◽  
Jorge Baño-Medina ◽  
Mikel N. Legasa ◽  
Maialen Iturbide ◽  
Rodrigo Manzanas ◽  
...  

Abstract. The increasing demand for high-resolution climate information has attracted growing attention to statistical downscaling (SDS) methods, due in part to their relative advantages and merits as compared to dynamical approaches (based on regional climate model simulations), such as their much lower computational cost and their fitness for purpose for many local-scale applications. As a result, a plethora of SDS methods is nowadays available to climate scientists, which has motivated recent efforts for their comprehensive evaluation, like the VALUE initiative (http://www.value-cost.eu, last access: 29 March 2020). The systematic intercomparison of a large number of SDS techniques undertaken in VALUE, many of them independently developed by different authors and modeling centers in a variety of languages/environments, has shown a compelling need for new tools allowing for their application within an integrated framework. In this regard, downscaleR is an R package for statistical downscaling of climate information which covers the most popular approaches (model output statistics – including the so-called “bias correction” methods – and perfect prognosis) and state-of-the-art techniques. It has been conceived to work primarily with daily data and can be used in the framework of both seasonal forecasting and climate change studies. Its full integration within the climate4R framework (Iturbide et al., 2019) makes possible the development of end-to-end downscaling applications, from data retrieval to model building, validation, and prediction, bringing to climate scientists and practitioners a unique comprehensive framework for SDS model development. In this article the main features of downscaleR are showcased through the replication of some of the results obtained in VALUE, placing an emphasis on the most technically complex stages of perfect-prognosis model calibration (predictor screening, cross-validation, and model selection) that are accomplished through simple commands allowing for extremely flexible model tuning, tailored to the needs of users requiring an easy interface for different levels of experimental complexity. As part of the open-source climate4R framework, downscaleR is freely available and the necessary data and R scripts to fully replicate the experiments included in this paper are also provided as a companion notebook.
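As an illustration of the perfect-prognosis workflow mentioned above (predictor screening, cross-validation, and model selection), the following minimal R sketch fits a regression transfer function between synthetic large-scale predictors and a local daily series under k-fold cross-validation. It uses only base R with invented variable names (pred, obs); it is not the downscaleR API, which wraps these steps in dedicated commands.

```r
# Minimal perfect-prognosis sketch with 5-fold cross-validation (illustrative only)
set.seed(1)
n    <- 1000
pred <- data.frame(z500 = rnorm(n), t850 = rnorm(n), q700 = rnorm(n))  # pseudo-predictors
obs  <- 2 + 0.8 * pred$t850 - 0.5 * pred$z500 + rnorm(n)               # pseudo-local series
df   <- data.frame(obs = obs, pred)

k       <- 5
folds   <- sample(rep(seq_len(k), length.out = n))   # random fold assignment
pred_cv <- numeric(n)
for (i in seq_len(k)) {
  train <- folds != i
  fit   <- glm(obs ~ ., data = df[train, ], family = gaussian())  # transfer function
  pred_cv[!train] <- predict(fit, newdata = df[!train, ])         # out-of-fold prediction
}
cor(obs, pred_cv)   # cross-validated skill estimate
```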


Author(s):  
María Laura Bettolli

Global climate models (GCMs) are fundamental tools for weather forecasting and climate predictions at different time scales, from intraseasonal prediction to climate change projections. Their design allows GCMs to simulate the global climate adequately, but they are not able to skillfully simulate local/regional climates. Consequently, downscaling and bias correction methods are increasingly needed and applied for generating useful local and regional climate information from the coarse GCM resolution. Empirical-statistical downscaling (ESD) methods generate climate information at the local scale, or with a greater resolution than that achieved by GCMs, by means of empirical or statistical relationships between large-scale atmospheric variables and the local observed climate. As a counterpart approach, dynamical downscaling is based on regional climate models that simulate regional climate processes with a greater spatial resolution, using GCM fields as initial or boundary conditions. Various ESD methods can be classified according to different criteria, depending on their approach, implementation, and application. In general terms, ESD methods can be categorized into subgroups that include transfer functions or regression models (either linear or nonlinear), weather generators, and weather typing methods and analogs. Although these methods can be grouped into different categories, they can also be combined to generate more sophisticated downscaling methods. In the last group, weather typing and analogs, the methods relate the occurrence of particular weather classes to local and regional weather conditions. In particular, the analog method is based on finding atmospheric states in the historical record that are similar to the atmospheric state on a given target day; the corresponding historical local weather conditions are then used to estimate the local weather conditions on the target day. The analog method is a relatively simple technique that has been extensively used as a benchmark in statistical downscaling applications. Easy to construct and applicable to any predictand variable, it has been shown to perform as well as other, more sophisticated methods. These attributes have inspired its application in diverse studies around the world that explore its ability to simulate different characteristics of regional climates.
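A compact base-R sketch of the analog step described above: the historical large-scale state closest (in Euclidean distance) to the target-day state is identified, and the local observation of that analog day is returned. All object names (hist_fields, hist_local, target) and the synthetic data are assumptions for illustration, not code from any cited package.

```r
# Analog downscaling sketch: nearest historical atmospheric state supplies the local weather
analog_downscale <- function(target, hist_fields, hist_local) {
  # Euclidean distance between the target state and every historical state (rows)
  d <- sqrt(rowSums(sweep(hist_fields, 2, target)^2))
  hist_local[which.min(d)]   # local observation of the closest analog day
}

hist_fields <- matrix(rnorm(500 * 20), nrow = 500)   # 500 days x 20 grid points (synthetic)
hist_local  <- rgamma(500, shape = 2)                # e.g. daily precipitation at a station
target      <- rnorm(20)                             # large-scale state of the target day
analog_downscale(target, hist_fields, hist_local)
```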


2021 ◽  
Vol 12 (4) ◽  
pp. 1253-1273
Author(s):  
Yoann Robin ◽  
Mathieu Vrac

Abstract. Bias correction and statistical downscaling are now regularly applied to climate simulations to make them more usable for impact models and studies. Over the last few years, various methods were developed to account for multivariate – inter-site or inter-variable – properties in addition to more usual univariate ones. Among such methods, temporal properties are either neglected or specifically accounted for, i.e. differently from the other properties. In this study, we propose a new multivariate approach called “time-shifted multivariate bias correction” (TSMBC), which aims to correct the temporal dependency in addition to the other marginal and multivariate aspects. TSMBC relies on considering the initial variables at various times (i.e. lags) as additional variables to be corrected. Hence, temporal dependencies (e.g. auto-correlations) to be corrected are viewed as inter-variable dependencies to be adjusted and an existing multivariate bias correction (MBC) method can then be used to answer this need. This approach is first applied and evaluated on synthetic data from a vector auto-regressive (VAR) process. In a second evaluation, we work in a “perfect model” context where a regional climate model (RCM) plays the role of the (pseudo-)observations, and where its forcing global climate model (GCM) is the model to be downscaled or bias corrected. For both evaluations, the results show a large reduction of the biases in the temporal properties, while inter-variable and spatial dependence structures are still correctly adjusted. However, increasing the number of lags too much does not necessarily improve the temporal properties, and an overly strong increase in the number of dimensions of the dataset to be corrected can even imply some potential instability in the adjusted and/or downscaled results, calling for a reasoned use of this approach for large datasets.
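The core idea of treating temporal dependence as additional inter-variable dependence can be sketched as a simple lag-embedding step in R: the helper below (add_lags, an assumed name) appends lagged copies of each variable as new columns, after which any existing multivariate bias-correction method could be applied. This is an illustration of the principle, not the authors' implementation.

```r
# Lag embedding: turn auto-correlations into inter-variable dependencies
add_lags <- function(x, n_lags = 2) {
  # x: matrix (time x variables); returns the matrix with lagged columns appended
  n      <- nrow(x)
  lagged <- lapply(0:n_lags, function(l) x[(1 + n_lags - l):(n - l), , drop = FALSE])
  do.call(cbind, lagged)   # rows shortened by n_lags, columns multiplied by (n_lags + 1)
}

x <- cbind(tas = rnorm(100), pr = rgamma(100, shape = 2))  # two synthetic daily variables
dim(add_lags(x, n_lags = 2))   # 98 rows, 6 columns (3 time slices x 2 variables)
```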


2020 ◽  
Author(s):  
Thomas Frisius ◽  
Daniela Jacob ◽  
Armelle Reca Remedio ◽  
Kevin Sieck ◽  
Claas Teichmann

Moving towards convection-permitting simulations at scales of a few kilometers is an emerging solution to the challenges and complexities of simulating different convective phenomena, especially over mountainous regions. In this study we carry out sensitivity experiments with the non-hydrostatic regional climate model REMO-NH at convection-permitting resolution (~3 km). We use this model in three setups in which different parameterization schemes for horizontal diffusion are tested. The first setup, "DIFF2", uses the standard 2nd-order diffusion, while the second setup, "DIFF4", applies 4th-order diffusion. The higher order has a smaller impact on larger scales, so that the atmospheric fields exhibit more details, especially in regions with high convective activity. In the third setup, "TURB3D", REMO-NH runs with a new 3D Smagorinsky-type turbulence scheme instead of the artificial diffusion schemes. Although turbulent horizontal diffusion is of second order in this setup, it incorporates a spatially and temporally varying exchange coefficient, so that flows with little deformation remain unaffected. The domain of the simulations, driven with EURO-CORDEX boundary data, covers Germany, and the time integration spans the year 2006.

Selected cases reveal a better representation of convective elements in DIFF4 and TURB3D when compared with DIFF2. We cannot compare these individual cases directly to observations since REMO-NH is not a reanalysis but a climate model. However, the spatial precipitation fields deduced from DWD radar data have characteristics that are more similar to DIFF4 and TURB3D than to DIFF2. More details are resolved in DIFF4 and TURB3D since the diffusion acts mainly at the smallest spatial scales resolved by the model. DIFF2 smooths convective activity drastically, so that it appears in the form of unrealistically wide convective cells. On the other hand, the precipitation statistics (seasonal average, standard deviation, and 95th percentile) show better agreement with observations in the simulations DIFF2 and TURB3D. TURB3D appears to be the best compromise regarding the simulation of precipitation fields. However, TURB3D exhibits a warm bias in the 2 m temperature field in autumn and winter. Further model development may help to overcome this issue.
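For orientation, a Smagorinsky-type horizontal exchange coefficient of the kind alluded to above is commonly written as K_h = (c_s * dx)^2 * |D|, where |D| is the horizontal deformation of the wind field. The R sketch below evaluates this textbook form with illustrative numbers; it is not the REMO-NH TURB3D scheme itself, and the constant and inputs are assumptions.

```r
# Textbook 2D Smagorinsky-type exchange coefficient (illustrative values only)
smagorinsky_K <- function(dudx, dudy, dvdx, dvdy, dx, c_s = 0.2) {
  deform <- sqrt((dudx - dvdy)^2 + (dvdx + dudy)^2)  # total horizontal deformation (1/s)
  (c_s * dx)^2 * deform                              # m^2/s; vanishes for undeformed flow
}

# Example: ~3 km grid spacing, weakly deformed flow
smagorinsky_K(dudx = 1e-4, dudy = 0, dvdx = 2e-4, dvdy = -1e-4, dx = 3000)
```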


Water ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 1102 ◽  
Author(s):  
Rishabh Gupta ◽  
Rabin Bhattarai ◽  
Ashok Mishra

The use of global and regional climate models has been increasing over the past few decades in order to analyze the future of natural resources and the socio-economic aspects of climate change. However, these climate model outputs can be quite biased, which makes it challenging to use them directly for analysis purposes. Therefore, a tool named Climate Data Bias Corrector was developed to correct the bias in climatic projections of historical and future periods for three primary climatic variables: rainfall, temperature (maximum and minimum), and solar radiation. It uses the quantile mapping approach, known for its efficiency and low computational cost for bias correction. Its graphical user interface (GUI) accepts input and produces output in commonly used file formats (comma- and tab-delimited). It also generates month-wise cumulative distribution function (CDF) plots for a random station/grid to allow the user to investigate the effectiveness of the correction statistically. The tool was verified with a case study on several agro-ecological zones of India and found to be efficient.
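The quantile mapping approach mentioned above can be summarized in a few lines of R: empirical quantiles of the model and the observations over a calibration period define a transfer function that is then applied to new model values. The function and variable names below are assumptions for illustration and do not reproduce the Climate Data Bias Corrector code.

```r
# Empirical quantile mapping sketch: map model values onto the observed distribution
quantile_map <- function(model_hist, obs_hist, model_new, probs = seq(0.01, 0.99, 0.01)) {
  qm <- quantile(model_hist, probs, names = FALSE)   # model quantiles (calibration period)
  qo <- quantile(obs_hist, probs, names = FALSE)     # observed quantiles (calibration period)
  # transfer function: interpolate each new model value between matched quantiles
  approx(qm, qo, xout = model_new, rule = 2)$y
}

obs_hist   <- rgamma(3000, shape = 2, scale = 3)     # pseudo-observed daily rainfall
model_hist <- rgamma(3000, shape = 2, scale = 4)     # biased model counterpart
model_fut  <- rgamma(3000, shape = 2, scale = 4.5)   # future model projection
corrected  <- quantile_map(model_hist, obs_hist, model_fut)
```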


2010 ◽  
Vol 10 (1) ◽  
pp. 9-27 ◽  
Author(s):  
H. Mao ◽  
M. Chen ◽  
J. D. Hegarty ◽  
R. W. Talbot ◽  
J. P. Koermer ◽  
...  

Abstract. Regional air quality simulations were conducted for summers 2001–2005 in the eastern US and subjected to extensive evaluation using various ground and airborne measurements. A brief climate evaluation focused on transport by comparing modeled dominant map types with ones from reanalysis. Reasonable agreement was found for their frequency of occurrence and distinctness of circulation patterns. The two most frequent map types from reanalysis were the Bermuda High (22%) and the passage of a Canadian cold front over the northeastern US (20%). The model captured their frequencies of occurrence at 25% and 18%, respectively. The simulated five-summer average distributions of 1-h ozone (O3) daily maxima using the Community Multiscale Air Quality (CMAQ) modeling system reproduced salient features in the observations. This suggests that the ability of the regional climate model to depict transport processes accurately is critical for reasonable simulations of surface O3. Comparison of the mean bias, root mean square error, and index of agreement between CMAQ summer surface 8-h O3 daily maxima and observations yielded –0.6±14 nmol/mol, 14 nmol/mol, and 71%, respectively. CMAQ performed best in moderately polluted conditions and less satisfactorily in highly polluted ones. This highlights the common problem of overestimating lower and underestimating higher modeled O3 levels. Diagnostic analysis suggested that the significant overestimation of inland nighttime low O3 mixing ratios may be attributed to underestimates of nitric oxide (NO) emissions at night. The absence of the second daily peak in simulations for the Appledore Island marine site possibly resulted from misrepresentation of the land surface type at the coarse grid resolution. Comparison with shipboard measurements suggested that CMAQ has an inherent problem of underpredicting O3 levels in continental outflow. Modeled O3 vertical profiles exhibited a lack of structure, indicating that key processes missing from CMAQ, such as lightning-produced NO and stratospheric intrusions, are important for accurate upper-tropospheric representations.
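For reference, the three evaluation scores quoted above (mean bias, root mean square error, and index of agreement) can be computed from paired model and observed series as in the following R sketch; the data are synthetic and the snippet is not taken from the CMAQ evaluation itself.

```r
# Standard evaluation scores for paired model (m) and observed (o) daily maxima
mean_bias <- function(m, o) mean(m - o)
rmse      <- function(m, o) sqrt(mean((m - o)^2))
ioa       <- function(m, o) {   # Willmott's index of agreement
  1 - sum((m - o)^2) / sum((abs(m - mean(o)) + abs(o - mean(o)))^2)
}

o <- rnorm(200, mean = 55, sd = 14)         # pseudo-observed 8-h O3 maxima (nmol/mol)
m <- o + rnorm(200, mean = -0.6, sd = 14)   # model series with a small negative bias
c(bias = mean_bias(m, o), rmse = rmse(m, o), ioa = ioa(m, o))
```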

