Effects of record length and resolution on the derived distribution of annual precipitation

2015, Vol 12 (12), pp. 12987–13018
Author(s): C. I. Meier, J. S. Moraga, G. Pranzini, P. Molnar

Abstract. Traditional frequency analysis of annual precipitation requires the fitting of a probability model to yearly precipitation totals. There are three potential problems with this approach: a long record (at least 25–30 years) is required in order to fit the model, years with missing data cannot be used, and the data need to be homogeneous. To overcome these limitations, we test an alternative methodology proposed by Eagleson (1978), based on the derived distribution approach (DDA). This allows for better estimation of the probability density function (pdf) of annual rainfall without requiring long records, provided that high-resolution precipitation data are available to derive external storm properties. The DDA combines marginal pdfs for storm depth and inter-arrival time to arrive at an analytical formulation of the distribution of annual precipitation under the assumption of independence between events. We tested the DDA at two temperate locations in different climates (Concepción, Chile, and Lugano, Switzerland), quantifying the effects of record length. Our results show that, as compared to the fitting of a normal or log-normal distribution, the DDA significantly reduces the uncertainty in annual precipitation estimates (especially interannual variability) when only short records are available. The DDA also reduces the bias in annual precipitation quantiles with high return periods. We also show that using precipitation data aggregated every 24 h, as commonly available at most weather stations, introduces a noticeable bias in the DDA. Our results point to the tangible benefits of installing high-resolution (hourly or finer) precipitation gauges at previously ungauged locations. We show that the DDA, in combination with high-resolution gauging, provides more accurate and less uncertain estimates of long-term precipitation statistics, such as interannual variability and quantiles of annual precipitation with high return periods, even for records as short as 5 years.
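The structure behind the DDA can be illustrated with a compound-Poisson form; the sketch below assumes Poisson storm arrivals with mean ν per year and exponentially distributed storm depths with mean μ (illustrative marginals only; the paper's exact choices of marginal pdfs may differ):

```latex
% Sketch of the derived cdf of annual precipitation P_A, assuming the number
% of storms per year m is Poisson with mean \nu and the storm depths h_i are
% i.i.d. exponential with mean \mu (illustrative marginals only).
F_{P_A}(p) \;=\; e^{-\nu} \;+\; \sum_{m=1}^{\infty} \frac{\nu^{m} e^{-\nu}}{m!}
      \,\Pr\!\Big[\sum_{i=1}^{m} h_i \le p\Big],
\qquad \sum_{i=1}^{m} h_i \sim \mathrm{Gamma}(m,\;\mu).
```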

2016, Vol 20 (10), pp. 4177–4190
Author(s): Claudio I. Meier, Jorge Sebastián Moraga, Geri Pranzini, Peter Molnar

Abstract. Interannual variability of precipitation is traditionally described by fitting a probability model to yearly precipitation totals. There are three potential problems with this approach: a long record (at least 25–30 years) is required in order to fit the model, years with missing rainfall data cannot be used, and the data need to be homogeneous, i.e., one has to assume stationarity. To overcome some of these limitations, we test an alternative methodology proposed by Eagleson (1978), based on the derived distribution (DD) approach. It allows estimation of the probability density function (pdf) of annual rainfall without requiring long records, provided that continuously gauged precipitation data are available to derive external storm properties. The DD approach combines marginal pdfs for storm depths and inter-arrival times to obtain an analytical formulation of the distribution of annual precipitation, under the simplifying assumptions of independence between events and independence between storm depth and time to the next storm. Because it is based on information about storms and not on annual totals, the DD can make use of information from years with incomplete data; more importantly, only a few years of rainfall measurements should suffice to estimate the parameters of the marginal pdfs, at least at locations where it rains with some regularity. For two temperate locations in different climates (Concepción, Chile, and Lugano, Switzerland), we randomly resample shortened time series to evaluate in detail the effects of record length on the DD, comparing the results with the traditional approach of fitting a normal (or lognormal) distribution. Then, at the same two stations, we assess the biases introduced in the DD when using daily totalized rainfall, instead of continuously gauged data. Finally, for randomly selected periods between 3 and 15 years in length, we conduct full blind tests at 52 high-quality gauging stations in Switzerland, analyzing the ability of the DD to estimate the long-term standard deviation of annual rainfall, as compared to direct computation from the sample of annual totals. Our results show that, as compared to the fitting of a normal or lognormal distribution (or equivalently, direct estimation of the sample moments), the DD approach reduces the uncertainty in annual precipitation estimates (especially interannual variability) when only short records (below 6–8 years) are available. In such cases, it also reduces the bias in annual precipitation quantiles with high return periods. We demonstrate that using precipitation data aggregated every 24 h, as commonly available at most weather stations, introduces a noticeable bias in the DD. These results point to the tangible benefits of installing high-resolution (hourly, at least) precipitation gauges, next to the customary, manual rain-measuring instrument, at previously ungauged locations. We propose that the DD approach is a suitable tool for the statistical description and study of annual rainfall, not only when short records are available, but also when dealing with nonstationary time series of precipitation. Finally, to avert any misinterpretation of the presented method, we should like to emphasize that it only applies for climatic analyses of annual precipitation totals; even though storm data are used, there is no relation to the study of extreme rainfall intensities needed for engineering design.
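As a rough numerical illustration of the compound Poisson–exponential variant sketched above (not the authors' exact formulation), the snippet below estimates the two marginal parameters from a hypothetical short storm record and evaluates the cdf and quantiles of annual totals; all names and numbers are invented for the example.

```python
import numpy as np
from scipy import stats

# Illustrative storm statistics extracted from a few years of
# high-resolution gauging (hypothetical numbers, not from the paper).
storm_depths_mm = np.array([4.2, 11.0, 2.8, 25.3, 7.6, 15.1, 3.3, 9.4])
storms_per_year = 55.0            # mean annual number of events (nu)

mu = storm_depths_mm.mean()       # mean storm depth (exponential assumption)
nu = storms_per_year              # Poisson rate of storm arrivals

def annual_cdf(p, nu, mu, m_max=400):
    """Compound Poisson-exponential cdf of the annual total:
    P[P_A <= p] = Pois(0; nu) + sum_m Pois(m; nu) * GammaCDF(p; m, mu)."""
    m = np.arange(1, m_max + 1)
    weights = stats.poisson.pmf(m, nu)
    terms = stats.gamma.cdf(p, a=m, scale=mu)
    return stats.poisson.pmf(0, nu) + np.sum(weights * terms)

def annual_quantile(prob, nu, mu):
    """Quantile of the annual total, found by bisection on the cdf."""
    lo, hi = 0.0, 10 * nu * mu
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if annual_cdf(mid, nu, mu) < prob:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print("Median annual precipitation:", round(annual_quantile(0.50, nu, mu), 1))
print("20-year wet quantile       :", round(annual_quantile(1 - 1 / 20, nu, mu), 1))
```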


2021, Vol 10 (7), pp. e9810715816
Author(s): Raimundo Mainar de Medeiros, Moacyr Cunha Filho, Victor Casimiro Piscoya, Renisson Nepoceno de Araújo Filho, Manoel Vieira de França, ...

Precipitation is of great importance for the management of water resources, particularly in degraded areas affected by deforestation and the removal of firewood. Frequency analysis was performed on the annual rainfall totals and the corresponding graphs were drawn up. Following the scale proposed by Meis et al. (1981), used by the meteorological centers and CPTEC/INPE and applied by Xavier et al. (2005), annual values close to the mean were classified as intermediate, values more than 25% above the average were considered very rainy years, and values more than 25% below the average were considered dry years. Application of Student's t test indicated that the precipitation data are, in general, significant at the 99% level. The results showed a tendency toward decreasing rainfall indices, with oscillations in precipitation throughout the 1962 to 2019 series, and revealed a recurrence of annual precipitation maxima at intervals of about 17, 13, and 9 years. A study with a longer series is suggested in order to verify these fluctuations and the contribution of El Niño/La Niña events in the study area.
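A minimal sketch of the ±25% classification rule described in the abstract, using hypothetical annual totals (the study itself analyzes the 1962–2019 series):

```python
import numpy as np

# Hypothetical annual precipitation totals (mm), for illustration only.
annual_totals = np.array([780.0, 1020.0, 650.0, 910.0, 1250.0, 430.0])
mean_total = annual_totals.mean()

def classify_year(total, mean_total, threshold=0.25):
    """Label a year relative to the long-term mean, following the
    +/-25% rule described in the abstract (very rainy / intermediate / dry)."""
    if total > mean_total * (1 + threshold):
        return "very rainy"
    if total < mean_total * (1 - threshold):
        return "dry"
    return "intermediate"

for total in annual_totals:
    print(f"{total:7.1f} mm -> {classify_year(total, mean_total)}")
```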


2020, Vol 59 (9), pp. 1519–1536
Author(s): Giuseppe Mascaro

Abstract. Intensity–duration–frequency (IDF) analyses of rainfall extremes provide critical information to mitigate, manage, and adapt to urban flooding. The accuracy and uncertainty of IDF analyses depend on the availability of historical rainfall records, which are more accessible at daily resolution and, quite often, are very sparse in developing countries. In this work, we quantify the performance of different IDF models as a function of the number of available high-resolution (Nτ) and daily (N24h) rain gauges. For this aim, we apply a cross-validation framework that is based on Monte Carlo bootstrapping experiments on records of 223 high-resolution gauges in central Arizona. We test five IDF models based on (two) local, (one) regional, and (two) scaling frequency analyses of annual rainfall maxima from 30-min to 24-h durations with the generalized extreme value (GEV) distribution. All models exhibit similar performance in simulating observed quantiles associated with return periods up to 30 years. When Nτ > 10, local and regional models have the best accuracy; bias correcting the GEV shape parameter for record length is recommended to estimate quantiles for large return periods. The uncertainty of all models, evaluated via Monte Carlo experiments, is very large when Nτ ≤ 5; however, if N24h ≥ 10 additional daily gauges are available, the uncertainty is greatly reduced and accuracy is increased by applying simple scaling models, which infer subdaily rainfall statistics from information at the daily scale. For all models, performance depends on the ability to capture the elevation control on their parameters. Although our work is site specific, its results provide insights for conducting future IDF analyses, especially in regions with sparse data.
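Two of the building blocks mentioned in the abstract, a local GEV fit to annual maxima and a simple-scaling extrapolation from daily statistics, can be sketched as follows; this is only an illustration with invented numbers, not the author's cross-validation framework.

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum rainfall depths (mm) for a 1-h duration.
annual_max_1h = np.array([18.3, 25.1, 12.7, 30.4, 22.0, 16.5, 27.8, 20.9,
                          14.2, 24.6, 19.1, 28.3])

# Local GEV fit (note: scipy's shape parameter c has the opposite sign of
# the usual hydrological convention).
c, loc, scale = stats.genextreme.fit(annual_max_1h)

def idf_quantile(return_period_yr, c, loc, scale):
    """Depth with the given return period from the fitted GEV."""
    return stats.genextreme.ppf(1.0 - 1.0 / return_period_yr, c, loc, scale)

for T in (2, 10, 30, 100):
    print(f"T = {T:3d} yr : {idf_quantile(T, c, loc, scale):6.1f} mm")

# Simple-scaling idea (illustrative): subdaily quantiles inferred from 24-h
# statistics via q(d) = q(24h) * (d / 24)**n, with the exponent n estimated
# from stations that do have high-resolution data.
n_scaling = 0.45                 # hypothetical scaling exponent
q_24h_100yr = 85.0               # hypothetical 100-yr daily depth (mm)
q_1h_100yr = q_24h_100yr * (1.0 / 24.0) ** n_scaling
print("Scaled 1-h, 100-yr depth:", round(q_1h_100yr, 1), "mm")
```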


Author(s): David K. Wright, Lance S. Glasgow, Ward W. McCaughey, Elaine K. Sutherland

2020, Vol 7 (1)
Author(s): Rui Ito, Tosiyuki Nakaegawa, Izuru Takayabu

Abstract. Ensembles of climate change projections created by general circulation models (GCMs) with high resolution are increasingly needed to develop adaptation strategies for regional climate change. The Meteorological Research Institute atmospheric GCM version 3.2 (MRI-AGCM3.2), which is listed in the Coupled Model Intercomparison Project phase 5 (CMIP5), has typically been run at resolutions of 60 km and 20 km. Ensembles of MRI-AGCM3.2 consist of members with multiple cumulus convection schemes and different patterns of future sea surface temperature, and are utilized together with their downscaled data; however, the limited size of the high-resolution ensemble may lead to undesirable biases and uncertainty in future climate projections that limit its appropriateness and effectiveness for studies on climate change and impact assessments. In this study, to develop a comprehensive understanding of the regional precipitation simulated with MRI-AGCM3.2, we investigate how well MRI-AGCM3.2 simulates present-day regional precipitation around the globe and compare the uncertainty in future precipitation changes, and the change projections themselves, between MRI-AGCM3.2 and the CMIP5 multiple atmosphere–ocean coupled GCM (AOGCM) ensemble. MRI-AGCM3.2 reduces the bias of the regional mean precipitation obtained with the high-performing CMIP5 models, with a reduction of approximately 20% in the bias over the Tibetan Plateau through East Asia and Australia. When 26 global land regions are considered, MRI-AGCM3.2 simulates the spatial pattern and the regional mean realistically in more regions than the individual CMIP5 models. As for the future projections, in 20 of the 26 regions the sign of annual precipitation change is identical between the 50th percentiles of the MRI-AGCM3.2 ensemble and the CMIP5 multi-model ensemble. In the other six regions, around the tropical South Pacific, differences between modeling with and without atmosphere–ocean coupling may affect the projections. The uncertainty in future changes in annual precipitation from MRI-AGCM3.2 partially overlaps the maximum–minimum uncertainty range of the full ensemble of CMIP5 models in all regions. Moreover, on average over individual regions, the projections from MRI-AGCM3.2 spread over roughly 0.8 of the uncertainty range of the high-performing CMIP5 models, compared to 0.4 of the range of the full ensemble.
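The "fraction of the uncertainty range" metric quoted at the end of the abstract is simply the spread of one ensemble relative to the max–min range of the other; a minimal sketch with invented projection values for a single region:

```python
import numpy as np

# Hypothetical projected changes in annual precipitation (%) for one region.
agcm_changes  = np.array([3.1, 4.8, 2.2, 5.5, 3.9])        # MRI-AGCM3.2 members
cmip5_changes = np.array([-1.0, 0.5, 2.0, 3.5, 6.0, 7.5])  # CMIP5 AOGCMs

cmip5_range = cmip5_changes.max() - cmip5_changes.min()
agcm_spread = agcm_changes.max() - agcm_changes.min()

# Fraction of the CMIP5 max-min uncertainty range spanned by the AGCM ensemble.
coverage = agcm_spread / cmip5_range
print(f"AGCM spread covers {coverage:.2f} of the CMIP5 range")
```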


Water, 2021, Vol 13 (15), pp. 2092
Author(s): Songbai Song, Yan Kang, Xiaoyan Song, Vijay P. Singh

The choice of a probability distribution function and the confidence intervals of estimated design values have long been of interest in flood frequency analysis. Although the four-parameter exponential gamma (FPEG) distribution has been developed for application in hydrology, its maximum likelihood estimation (MLE)-based parameter estimation method and the asymptotic variance of its quantiles have not been well documented. In this study, the MLE method was used to estimate the parameters and confidence intervals of quantiles of the FPEG distribution. The method entails parameter estimation and asymptotic variances of the quantile estimators. The parameter estimation consisted of a set of four equations which, after algebraic simplification, were solved using a three-dimensional Levenberg–Marquardt algorithm. Based on the sample information matrix and Fisher's expected information matrix, derivatives of the design quantile with respect to the parameters were derived. The method was applied to annual precipitation data from the Weihe watershed, China, and confidence intervals for the quantiles were determined. The results showed that the FPEG is a good candidate for modeling annual precipitation data and can provide guidance for estimating design values.
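The general recipe the abstract describes (numerical MLE, an information matrix, and a delta-method variance for a design quantile) can be illustrated generically. Because the FPEG pdf is not reproduced here, the sketch below uses a two-parameter gamma distribution as a stand-in and invented data; it shows the steps, not the paper's actual derivation.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical annual precipitation sample (mm).
x = np.array([612.0, 734.0, 555.0, 801.0, 690.0, 644.0, 720.0, 580.0,
              760.0, 665.0, 700.0, 630.0])

def nll(theta):
    """Negative log-likelihood of a gamma(a, scale) stand-in distribution."""
    a, scale = theta
    if a <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.gamma.logpdf(x, a=a, scale=scale))

res = optimize.minimize(nll, x0=[10.0, 70.0], method="Nelder-Mead")
theta_hat = res.x

def hessian(f, theta):
    """Central-difference Hessian with parameter-scaled steps."""
    h = 1e-4 * np.maximum(np.abs(theta), 1.0)
    H = np.zeros((len(theta), len(theta)))
    for i in range(len(theta)):
        for j in range(len(theta)):
            tpp = theta.copy(); tpp[i] += h[i]; tpp[j] += h[j]
            tpm = theta.copy(); tpm[i] += h[i]; tpm[j] -= h[j]
            tmp = theta.copy(); tmp[i] -= h[i]; tmp[j] += h[j]
            tmm = theta.copy(); tmm[i] -= h[i]; tmm[j] -= h[j]
            H[i, j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4 * h[i] * h[j])
    return H

# Observed information and approximate parameter covariance matrix.
cov = np.linalg.inv(hessian(nll, theta_hat))

# 100-year design quantile and its delta-method standard error.
p = 1.0 - 1.0 / 100.0
def q(theta):
    return stats.gamma.ppf(p, a=theta[0], scale=theta[1])

h = 1e-4 * np.maximum(np.abs(theta_hat), 1.0)
grad = np.array([(q(theta_hat + h[i] * np.eye(2)[i]) -
                  q(theta_hat - h[i] * np.eye(2)[i])) / (2 * h[i])
                 for i in range(2)])
se = np.sqrt(grad @ cov @ grad)
q_hat = q(theta_hat)
print(f"q100 = {q_hat:.1f} mm, approx. 95% CI: "
      f"[{q_hat - 1.96 * se:.1f}, {q_hat + 1.96 * se:.1f}]")
```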


2021, Vol 13 (11), pp. 2040
Author(s): Xin Yan, Hua Chen, Bingru Tian, Sheng Sheng, Jinxing Wang, ...

High-spatial-resolution precipitation data are of great significance in many applications, such as ecology, hydrology, and meteorology. Acquiring high-precision, high-resolution precipitation data over a large area is still a great challenge. In this study, a downscaling–merging scheme based on random forest and cokriging is presented to address this problem. First, an enhanced decision tree model based on the random forest machine learning algorithm is used to downscale satellite daily precipitation data to a 0.01° grid. The downscaled satellite-based daily precipitation is then merged with gauge observations using the cokriging method. The scheme is applied to downscale the Global Precipitation Measurement Mission (GPM) daily precipitation product over the upstream part of the Hanjiang Basin. The experimental results indicate that (1) the downscaling model based on random forest can correctly downscale the GPM daily precipitation data in space, retaining the accuracy of the original GPM data while greatly improving their spatial detail; (2) the GPM precipitation data can also be downscaled at the seasonal scale; and (3) the merging method based on cokriging greatly improves the accuracy of the downscaled GPM daily precipitation data. This study provides an efficient scheme for generating high-resolution, high-quality daily precipitation data over a large area.
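A minimal sketch of the two-step idea: random-forest downscaling with scikit-learn, followed by a gauge-based residual correction. Inverse-distance weighting of residuals is used here only as a simplified stand-in for the paper's cokriging, and all grids, predictors, and values are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# --- Step 1: train on coarse-resolution cells (synthetic example) ---------
# Hypothetical predictors at the coarse GPM resolution: lon, lat, elevation.
n_coarse = 400
X_coarse = np.column_stack([
    rng.uniform(110.0, 112.0, n_coarse),    # lon (deg)
    rng.uniform(32.0, 34.0, n_coarse),      # lat (deg)
    rng.uniform(100.0, 2500.0, n_coarse),   # elevation (m)
])
gpm_precip = rng.gamma(2.0, 3.0, n_coarse)  # coarse GPM daily precipitation (mm)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_coarse, gpm_precip)

# --- Step 2: predict on a fine 0.01-deg grid ------------------------------
n_fine = 5000
X_fine = np.column_stack([
    rng.uniform(110.0, 112.0, n_fine),
    rng.uniform(32.0, 34.0, n_fine),
    rng.uniform(100.0, 2500.0, n_fine),
])
downscaled = rf.predict(X_fine)

# --- Step 3: merge with gauges (IDW of residuals, a stand-in for cokriging)
gauge_xy = X_fine[:20, :2]                         # hypothetical gauge locations
gauge_obs = downscaled[:20] + rng.normal(0, 1, 20) # hypothetical observations
residuals = gauge_obs - downscaled[:20]

d = np.linalg.norm(X_fine[:, None, :2] - gauge_xy[None, :, :], axis=2) + 1e-6
w = 1.0 / d**2
merged = downscaled + (w * residuals).sum(axis=1) / w.sum(axis=1)
print("Fine-grid precipitation after merging:", merged[:5].round(2))
```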

