Assessing climate change impacts on daily streamflow in California: the utility of daily large-scale climate data

2010 ◽  
Vol 7 (1) ◽  
pp. 1209-1243 ◽  
Author(s):  
E. P. Maurer ◽  
H. G. Hidalgo ◽  
T. Das ◽  
M. D. Dettinger ◽  
D. R. Cayan

Abstract. Three statistical downscaling methods were applied to the NCEP/NCAR reanalysis (used as a surrogate for the best possible general circulation model), and the downscaled meteorology was used to drive a hydrologic model over California. The historical record was divided into an "observed" period of 1950–1976 to provide the basis for downscaling, and a "projected" period of 1977–1999 for assessing skill. The downscaling methods were a bias-correction/spatial downscaling method (BCSD), which relies solely on monthly large-scale meteorology and resamples the historical record to obtain daily sequences; a constructed analogues approach (CA), which uses daily large-scale anomalies; and a hybrid method (BCCA), which applies a quantile-mapping bias correction to the large-scale data prior to the CA approach. At 11 sites we compared three simulated daily flow statistics: streamflow timing, 3-day peak flow, and 7-day low flow. While all downscaling methods produced reasonable streamflow statistics at most locations, the BCCA method consistently outperformed the others, capturing the daily large-scale skill and translating it into simulated streamflows that more skillfully reproduced observationally driven streamflows.
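
The quantile-mapping step at the heart of the BCSD and BCCA methods can be illustrated with a short sketch. This is a minimal, generic version under our own assumptions (synthetic gamma-distributed precipitation, an empirical-CDF lookup, a hypothetical quantile_map helper), not the authors' implementation.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Map model values onto the observed distribution: find each value's
    empirical quantile in the historical model record, then look up that
    quantile in the observations (generic sketch, not the authors' code)."""
    sorted_hist = np.sort(model_hist)
    probs = np.searchsorted(sorted_hist, model_fut) / len(sorted_hist)
    return np.quantile(obs_hist, np.clip(probs, 0.0, 1.0))

# Synthetic example: a model that is roughly 20% too wet.
rng = np.random.default_rng(0)
obs_hist = rng.gamma(2.0, 5.0, size=10_000)
model_hist = 1.2 * rng.gamma(2.0, 5.0, size=10_000)
model_fut = 1.2 * rng.gamma(2.0, 5.5, size=10_000)
corrected = quantile_map(model_hist, obs_hist, model_fut)
print(f"raw mean {model_fut.mean():.2f}  corrected mean {corrected.mean():.2f}")
```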


2014 ◽  
Vol 5 (4) ◽  
pp. 610-624 ◽  
Author(s):  
Sara Nazif ◽  
Mohammad Karamouz

Recent investigations have demonstrated scientists' consensus on the increase in global mean temperature and climate variability. These changes alter the hydro-climatic conditions of regions. Investigation of surface water changes is an important issue in water resources planning as well as in the operation of reservoirs. In this study a data-based mechanistic (DBM) model has been used for daily streamflow simulation. This is a data-driven statistical simulation model that can take advantage of additional climate variables with time-variable configurations. The model has been developed to simulate streamflow into three reservoirs located in central Iran, using daily rainfall, temperature and streamflow data. Comparison of the DBM results with an autoregressive integrated moving average (ARIMA) model, used as an alternative, shows the higher performance of the DBM. To include climate change impacts in the study, an artificial neural network (ANN)-based statistical downscaling model is developed for rainfall and temperature downscaling. The downscaled temperature and rainfall data under climate change scenarios, based on HadCM3 general circulation model outputs, are used to evaluate the climate change impacts on streamflow for the 2000–2050 time horizon. The results demonstrate the considerable impact of climate change on streamflow variability, with significantly different behaviour in the three adjacent basins.
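
The comparison described here, a streamflow model that exploits climate inputs against a purely autoregressive baseline, can be sketched as follows. The synthetic rainfall–flow data and the plain least-squares fits are our own assumptions; this is not the authors' DBM or ARIMA setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
rain = rng.gamma(0.5, 4.0, size=n)                 # synthetic daily rainfall (mm)
flow = np.zeros(n)
for t in range(1, n):                              # toy storage + rainfall response
    flow[t] = 0.85 * flow[t - 1] + 0.3 * rain[t] + rng.normal(0.0, 0.2)

def one_step_rmse(X, y):
    """Fit y ~ [1, X] by least squares and return the in-sample RMSE."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((y - A @ coef) ** 2))

y = flow[1:]
rmse_ar = one_step_rmse(flow[:-1], y)                                     # AR(1) baseline
rmse_climate = one_step_rmse(np.column_stack([flow[:-1], rain[1:]]), y)   # + rainfall input
print(f"AR(1) RMSE: {rmse_ar:.3f}   AR(1)+rain RMSE: {rmse_climate:.3f}")
```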


2012 ◽  
Vol 9 (4) ◽  
pp. 4869-4918 ◽  
Author(s):  
S. Samadi ◽  
G. J. Carbone ◽  
M. Mahdavi ◽  
F. Sharifi ◽  
M. R. Bihamta

Abstract. Linear and non-linear statistical downscaling methods are used to relate large-scale climate information from a general circulation model (GCM) to local-scale river flows in west Iran. This study aims to investigate and evaluate the more promising downscaling techniques, and provides a thorough intercomparison using the Karkheh catchment, in a semi-arid region, as an experimental site for the years 2040 to 2069. A hybrid conceptual hydrological model was used in conjunction with outputs from the HadCM3 GCM and two downscaling techniques, the Statistical Downscaling Model (SDSM) and an Artificial Neural Network (ANN), to determine how future streamflow may change in a semi-arid catchment. The results show that the choice of downscaling algorithm has a significant impact on the streamflow estimates for a semi-arid catchment, which are mainly influenced by the precipitation and temperature projections. According to the SDSM and ANN projections, daily temperature will increase by up to +0.58 °C (+3.90%) and +0.48 °C (+3.48%), and daily precipitation will decrease by up to −0.1 mm (−2.56%) and −0.4 mm (−2.82%), respectively. Moreover, streamflow changes corresponding to the downscaled future projections show a reduction in mean annual flow of −3.7 m³ s⁻¹ and −9.47 m³ s⁻¹ using SDSM and ANN outputs, respectively. The results suggest a significant decrease of streamflow in both downscaling projections, particularly in winter. The discussion considers the performance of each statistical method for downscaling future flow at the catchment scale, as well as the relationship between atmospheric processes and flow variability and change.
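
The reported figures are differences between baseline and downscaled-future means, in absolute and percent terms. A minimal sketch of that arithmetic on hypothetical daily series is shown below; the 0.5 °C warming and 3% drying are assumed for illustration only, not taken from the study.

```python
import numpy as np

def change_summary(baseline, projected):
    """Absolute and percent change in the mean of a daily series
    (illustrative arithmetic only; not the study's processing chain)."""
    delta = projected.mean() - baseline.mean()
    return delta, 100.0 * delta / baseline.mean()

rng = np.random.default_rng(2)
# Hypothetical daily series for a historical and a downscaled future period.
t_base = rng.normal(15.0, 8.0, size=365 * 30)
t_fut = t_base + 0.5                        # assumed 0.5 degC warming
p_base = rng.gamma(0.4, 5.0, size=365 * 30)
p_fut = p_base * 0.97                       # assumed 3% drying

print("T change: %+.2f degC (%+.2f%%)" % change_summary(t_base, t_fut))
print("P change: %+.2f mm  (%+.2f%%)" % change_summary(p_base, p_fut))
```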


2013 ◽  
Vol 4 (1) ◽  
pp. 17-37 ◽  
Author(s):  
Haregewoin Haile Chernet ◽  
Knut Alfredsen ◽  
Ånund Killingtveit

Hydropower is the most important renewable energy source for electricity in Norway. However, it is also the resource most vulnerable to climate change. Despite the importance of hydropower and its vulnerability to climate change, most studies have been concerned with large-scale resource assessment. This study addresses climate change impacts at the scale of a single hydropower system in Norway. The impact study is based on a combination of a hydrological model and a hydropower simulation model driven by scenarios from an Atmosphere-Ocean General Circulation Model (AOGCM). These climate scenarios were used to drive the HBV (Hydrologiska Byråns Vattenbalansavdelning) hydrological model to provide inflow scenarios for the hydropower study. The nMAG hydropower simulation model was used to simulate the hydropower system for the control and scenario periods and to investigate future changes in power production. In general, the projections indicate an average increase of 11–17% in annual inflow to the system, earlier peaks and a larger increase in spring. The hydropower simulation results show an increase in energy generation of 9–20% under the current reservoir operation strategies.
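
A toy example of how changed inflow scenarios propagate to energy production under a fixed operation rule is sketched below. The single-reservoir model, head, efficiency, capacity and the assumed 14% inflow increase are our own assumptions and are far simpler than an nMAG simulation.

```python
import numpy as np

def simulate_energy(inflow, capacity, max_release, efficiency=0.9, head=100.0):
    """Toy single-reservoir routing: store inflows, release up to turbine
    capacity, spill the excess, and sum hydropower energy in MWh
    (simplified stand-in for an nMAG-style run, not the real model)."""
    rho, g, week = 1000.0, 9.81, 604_800.0
    storage, energy = 0.0, 0.0
    for q in inflow:                                   # q in m3/s, weekly steps
        storage += q * week                            # inflow volume, m3
        release = min(storage, max_release * week)
        storage = min(storage - release, capacity)     # spill above capacity
        energy += efficiency * rho * g * head * release / 3.6e9
    return energy

rng = np.random.default_rng(6)
control = rng.gamma(3.0, 20.0, size=52)                # synthetic weekly inflow, m3/s
scenario = control * 1.14                              # assumed ~14% inflow increase
e0 = simulate_energy(control, 5e8, 80.0)
e1 = simulate_energy(scenario, 5e8, 80.0)
print(f"Energy change: {100 * (e1 - e0) / e0:+.1f}%")
```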


2013 ◽  
Vol 17 (6) ◽  
pp. 2147-2159 ◽  
Author(s):  
E. P. Maurer ◽  
T. Das ◽  
D. R. Cayan

Abstract. When correcting for biases in general circulation model (GCM) output, for example when statistically downscaling for regional and local impact studies, a common assumption is that the GCM biases can be characterized by comparing model simulations and observations for a historical period. We demonstrate some complications with this assumption, with GCM biases varying between mean and extreme values and for different sets of historical years. Daily precipitation and maximum and minimum temperature from late 20th century simulations by four GCMs over the United States were compared to gridded observations. Using random years from the historical record we select a "base" set and a 10 yr independent "projected" set. We compare differences in biases between these sets at median and extreme percentiles. On average, a base set with as few as 4 randomly selected years is often adequate to characterize the biases in daily GCM precipitation and temperature, at both median and extreme values; 12 yr provided higher confidence that bias correction would be successful. This suggests that some of the GCM bias is time invariant. When characterizing bias with a set of consecutive years, the set must be long enough to accommodate regional low-frequency variability, since the bias also exhibits this variability. Newer climate models included in the Intergovernmental Panel on Climate Change fifth assessment will allow this study to be extended to a longer observational period and to finer scales.
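
The random-year experiment described here can be sketched as follows: estimate the bias at the median and an extreme percentile from a small, randomly drawn "base" set of years, and compare it against the bias over an independent "projected" set. The synthetic Tmax series and the simple additive bias are assumptions for illustration, not the paper's data or method.

```python
import numpy as np

def percentile_bias(model, obs, q):
    """Model-minus-observed difference at percentile q (simple additive bias)."""
    return np.percentile(model, q) - np.percentile(obs, q)

def pooled(data, subset):
    return np.concatenate([data[y] for y in subset])

rng = np.random.default_rng(3)
years = np.arange(1950, 2000)
# Hypothetical daily Tmax by year: a model warm bias of ~1.5 degC plus noise.
obs = {y: rng.normal(20.0, 9.0, size=365) for y in years}
gcm = {y: obs[y] + 1.5 + rng.normal(0.0, 0.5, size=365) for y in years}

base = rng.choice(years, size=4, replace=False)                   # short "base" set
proj = rng.choice(np.setdiff1d(years, base), size=10, replace=False)

for q in (50, 95):
    b_base = percentile_bias(pooled(gcm, base), pooled(obs, base), q)
    b_proj = percentile_bias(pooled(gcm, proj), pooled(obs, proj), q)
    print(f"q{q}: base-set bias {b_base:+.2f}  projected-set bias {b_proj:+.2f}")
```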


Ocean Science ◽  
2012 ◽  
Vol 8 (2) ◽  
pp. 143-159 ◽  
Author(s):  
S. Cailleau ◽  
J. Chanut ◽  
J.-M. Lellouche ◽  
B. Levier ◽  
C. Maraldi ◽  
...  

Abstract. The regional ocean operational system remains a key element in downscaling from large-scale (global or basin-scale) systems to coastal ones. It enables the transition between systems in which the resolution and the resolved physics are quite different. Indeed, coastal applications need a system to predict local high-frequency events (shorter than a day) such as storm surges, while deep-sea applications need a system to predict large-scale, lower-frequency ocean features. In the framework of the ECOOP project, a regional system for the Iberia-Biscay-Ireland area has been upgraded from an existing V0 version to a V2. This paper focuses on the improvements from the V1 system, whose physics are close to those of a large-scale basin system, to the V2, whose physics are better adapted to shelf and coastal issues. Substantial developments in the NEMO ocean general circulation model, such as higher regional resolution, tides, a non-linear free surface and adapted vertical mixing schemes, among others, have been implemented in the V2 version. As a result, regional thermal fronts due to tidal mixing now appear in the latest version's solution and are quite well positioned. Moreover, the simulation of stratification in shelf areas is also improved in the V2.


2018 ◽  
Vol 22 (10) ◽  
pp. 1-22 ◽  
Author(s):  
Andrew R. Bock ◽  
Lauren E. Hay ◽  
Gregory J. McCabe ◽  
Steven L. Markstrom ◽  
R. Dwight Atkinson

Abstract. The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS there are downscaled GCMs that can reliably simulate historical climatic conditions. However, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test at a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
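
A minimal sketch of the KS-test screening described here, using scipy and synthetic monthly runoff in place of the GSD- and SD-driven MWBM output; the series lengths, distributions and the 0.05 threshold reading are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
# Hypothetical monthly runoff from station-driven (GSD) and downscaled-GCM-driven runs.
run_gsd = rng.gamma(2.0, 30.0, size=672)     # 1950-2005, monthly
run_sd = rng.gamma(2.1, 29.0, size=672)

stat, p_value = ks_2samp(run_gsd, run_sd)
reliable = p_value >= 0.05                   # cannot reject equal distributions at 0.05
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}, reliable = {reliable}")
```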


2007 ◽  
Vol 4 (5) ◽  
pp. 3413-3440 ◽  
Author(s):  
E. P. Maurer ◽  
H. G. Hidalgo

Abstract. Downscaling of climate model data is essential to most impact analyses. We compare two methods of statistical downscaling to produce continuous, gridded time series of precipitation and surface air temperature at a 1/8-degree (approximately 140 km² per grid cell) resolution over the western U.S. We use NCEP/NCAR Reanalysis data from 1950–1999 as a surrogate general circulation model (GCM). The two methods are constructed analogues (CA) and bias correction and spatial downscaling (BCSD), both of which have been shown to be skillful in different settings, and BCSD has been used extensively in hydrologic impact analysis. Both methods use the coarse-scale reanalysis fields of precipitation and temperature as predictors of the corresponding fine-scale fields. CA downscales daily large-scale data directly, while BCSD downscales monthly data and uses a random resampling technique to generate daily values. The methods show comparable skill in producing downscaled, gridded fields of precipitation and temperature at monthly and seasonal scales. For daily precipitation, both methods exhibit some skill in reproducing observed wet and dry extremes, and the difference between the methods is not significant, reflecting the generally low skill in daily precipitation variability in the reanalysis data. For low-temperature extremes, the CA method produces greater downscaling skill than BCSD in the fall and winter seasons. For high-temperature extremes, CA demonstrates higher skill than BCSD in summer. We find that the choice of the most appropriate downscaling technique depends on the variables, seasons, and regions of interest, on the availability of daily data, and on whether the day-to-day correspondence of weather from the GCM needs to be reproduced for some applications. The ability to produce skillful downscaled daily data depends primarily on the ability of the climate model to show daily skill.
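
The temporal disaggregation that lets BCSD turn downscaled monthly totals into daily sequences can be sketched as below: resample an observed daily pattern for the same calendar month and rescale it to the downscaled monthly total. The helper name, the precipitation-only multiplicative rescaling and the synthetic "library" of Januaries are our own simplifying assumptions, not the published BCSD code.

```python
import numpy as np

def disaggregate_month(monthly_total, historical_daily_months, rng):
    """Generate a daily sequence whose sum matches a downscaled monthly
    precipitation total by resampling one historical month and rescaling it
    (sketch of a BCSD-style disaggregation step; assumptions ours)."""
    pattern = historical_daily_months[rng.integers(len(historical_daily_months))]
    scale = monthly_total / pattern.sum() if pattern.sum() > 0 else 0.0
    return pattern * scale

rng = np.random.default_rng(5)
# Hypothetical library of observed daily precipitation for past Januaries (mm/day).
history = [rng.gamma(0.3, 8.0, size=31) for _ in range(50)]
daily = disaggregate_month(monthly_total=120.0, historical_daily_months=history, rng=rng)
print(daily.sum())   # ~120.0 by construction
```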


2006 ◽  
Vol 24 (8) ◽  
pp. 2075-2089 ◽  
Author(s):  
A. Chakraborty ◽  
R. S. Nanjundiah ◽  
J. Srinivasan

Abstract. A theory is proposed to determine the onset of the Indian Summer Monsoon (ISM) in an Atmospheric General Circulation Model (AGCM). The onset of ISM is delayed substantially in the absence of global orography. The impact of orography over different parts of the Earth on the onset of ISM has also been investigated using five additional perturbed simulations. The large difference in the date of onset of ISM in these simulations has been explained by a new theory based on the Surface Moist Static Energy (SMSE) and vertical velocity at the mid-troposphere. It is found that onset occurs only after SMSE crosses a threshold value and the large-scale vertical motion in the middle troposphere becomes upward. This study shows that both dynamics and thermodynamics play profound roles in the onset of the monsoon.
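
The onset criterion can be illustrated with a short sketch: declare onset on the first day when surface moist static energy exceeds a threshold and mid-tropospheric motion is upward (negative pressure velocity). The threshold value, the synthetic seasonal cycles and the variable names are assumptions for illustration, not the paper's diagnostics.

```python
import numpy as np

def onset_day(smse, omega500, smse_threshold):
    """First day on which SMSE exceeds the threshold AND mid-tropospheric
    motion is upward (omega < 0 in pressure coordinates). Illustrative only."""
    for day, (e, w) in enumerate(zip(smse, omega500)):
        if e > smse_threshold and w < 0.0:
            return day
    return None

# Synthetic seasonal cycle: SMSE ramps up through spring, ascent develops later.
days = np.arange(1, 181)                     # 1 January to the end of June
smse = 330.0 + 0.1 * days                    # roughly kJ/kg, rising through spring
omega500 = 0.05 - 0.001 * days               # switches to ascent around day 50
print(onset_day(smse, omega500, smse_threshold=345.0))
```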

