On the importance of appropriate rain-gauge catch correction for hydrological modelling at mid to high latitudes

2012 ◽  
Vol 9 (3) ◽  
pp. 3607-3655 ◽  
Author(s):  
S. Stisen ◽  
A. L. Højberg ◽  
L. Troldborg ◽  
J. C. Refsgaard ◽  
B. S. B. Christensen ◽  
...  

Abstract. An existing rain gauge catch correction method addressing solid and liquid precipitation was applied both as monthly mean correction factors based on a 30 yr climatology (standard correction) and as daily correction factors based on daily observations of wind speed and temperature (dynamic correction). The two methods resulted in different winter precipitation rates for the period 1990–2010. The resulting precipitation data sets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimized model parameter sets. Simulated stream discharge is improved significantly when introducing a dynamic precipitation correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimized model parameters are much more physically plausible for the model based on dynamic correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site-specific calibration showed better performance for parameter values based on the dynamic correction. Similarly, the performance of the dynamic correction method was superior when considering two single years with a much drier and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that dynamic precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where the winter precipitation type (solid/liquid) fluctuates significantly, causing climatological mean correction factors to be inadequate.

2012 ◽  
Vol 16 (11) ◽  
pp. 4157-4176 ◽  
Author(s):  
S. Stisen ◽  
A. L. Højberg ◽  
L. Troldborg ◽  
J. C. Refsgaard ◽  
B. S. B. Christensen ◽  
...  

Abstract. Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time–space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990–2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site-specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performance of the TSV correction method was superior when considering two single years with a much drier and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
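A minimal sketch of the two correction strategies contrasted in this abstract, applying either a daily wind- and temperature-dependent factor (TSV) or a fixed climatological monthly factor (HMM). The correction function below is a hypothetical placeholder with qualitative behaviour only (larger factors for snow and for higher wind speeds); it is not the Allerup-type correction used in the paper.

```python
# Hedged sketch: catch_correction_factor() is an illustrative placeholder,
# not the correction model applied in the DK-Model study.
import numpy as np

def catch_correction_factor(wind_speed, temperature):
    """Hypothetical catch-correction factor: undercatch is larger for solid
    precipitation (T < 0 deg C) and grows with wind speed."""
    liquid = 1.05 + 0.02 * wind_speed
    solid = 1.15 + 0.10 * wind_speed
    return np.where(temperature < 0.0, solid, liquid)

def correct_tsv(precip, wind_speed, temperature):
    """Time-space variable (TSV) correction: daily, locally varying factors."""
    return precip * catch_correction_factor(wind_speed, temperature)

def correct_hmm(precip, month, monthly_factors):
    """Historic mean monthly (HMM) correction: one climatological factor per month."""
    return precip * monthly_factors[month - 1]
```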


2021 ◽  
Vol 13 (15) ◽  
pp. 2922
Author(s):  
Yang Song ◽  
Patrick D. Broxton ◽  
Mohammad Reza Ehsani ◽  
Ali Behrangi

The combination of snowfall, snow water equivalent (SWE), and precipitation rate measurements from 39 snow telemetry (SNOTEL) sites in Alaska was used to assess the performance of various precipitation products from satellites, reanalysis, and rain gauges. Precipitation observations from two water years (2018–2019) of a high-resolution radar/rain gauge product (Stage IV) were also utilized to give insights into the scaling differences between the various products. The outcomes were used to assess two popular methods for rain gauge undercatch correction. It was found that SWE and precipitation measurements at the SNOTEL sites, as well as precipitation estimates based on Stage IV data, are generally consistent and can provide a range within which other products can be assessed. The time series of snowfall and SWE accumulation suggest that most of the products can capture snowfall events; however, differences exist in their accumulation. Reanalysis products tended to overestimate snow accumulation in the study area, while the current combined passive microwave remote sensing product (i.e., IMERG-HQ) underestimates snowfall accumulation. We found that correction factors applied to rain gauges are effective for reducing their undercatch, especially for snowfall. However, no improvement in correlation is seen when correction factors are applied, and rainfall is still estimated better than snowfall. Even though IMERG-HQ has less skill for capturing snowfall than rainfall, analysis using Taylor plots showed that the combined microwave product does have skill for capturing the geographical distribution of snowfall and precipitation accumulation; therefore, bias adjustment might lead to reasonable precipitation estimates. This study demonstrates that other snow properties (e.g., SWE accumulation at the SNOTEL sites) can complement precipitation data to estimate snowfall. In the future, gridded SWE and snow depth data from GlobSnow and Sentinel-1 can be used to assess snowfall and its distribution over broader regions.
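A minimal sketch of one simple way SWE observations can complement gauge data for snowfall estimation, as suggested above: treating positive day-to-day SWE increments as new snowfall. The data and function name are illustrative assumptions, not the study's processing chain.

```python
# Hedged sketch: snowfall inferred from daily SWE increments at a SNOTEL-like site.
import numpy as np

def snowfall_from_swe(swe_mm):
    """Estimate daily snowfall (mm water equivalent) as positive SWE increments.
    Melt and settlement produce negative increments and are ignored."""
    increments = np.diff(np.asarray(swe_mm, dtype=float))
    return np.clip(increments, 0.0, None)

swe = [10.0, 10.0, 18.5, 18.0, 25.0]   # toy daily SWE record (mm)
print(snowfall_from_swe(swe))           # -> [0. 8.5 0. 7.]
```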


2021 ◽  
Vol 13 (12) ◽  
pp. 2405
Author(s):  
Fengyang Long ◽  
Chengfa Gao ◽  
Yuxiang Yan ◽  
Jinling Wang

Precise modeling of the weighted mean temperature (Tm) is critical for realizing real-time conversion from zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen the research on Tm modeling with neural networks, expand the application scope of Tm models, and provide global users with more options for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) has been developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both model design and model performance. First, the data used for modeling cover the whole troposphere rather than just the layer near the Earth's surface; second, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better model performance on the global scale; furthermore, the ENNTm was designed to meet the requirements of three different application conditions by providing three sets of model parameters, i.e., estimation of Tm without measured meteorological elements, with only measured temperature, and with both measured temperature and water vapor pressure. The validation was carried out using globally distributed radiosonde data, and the results show that the ENNTm performs better than other competing models from different perspectives under the same application conditions. The proposed model expands the application scope of Tm estimation and provides global users with more choices in real-time GNSS-PWV retrieval.
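A minimal sketch of the standard Bevis-style ZWD-to-PWV conversion that motivates Tm modeling. This is the generic relation, not the ENNTm model itself; the refractivity constants below are typical literature values and vary slightly between studies.

```python
# Hedged sketch: PWV = Pi(Tm) * ZWD, with Pi the dimensionless conversion factor.
def zwd_to_pwv(zwd_m, tm_kelvin):
    rho_w = 1000.0    # density of liquid water, kg m^-3
    R_v = 461.5       # specific gas constant of water vapor, J kg^-1 K^-1
    k2_prime = 22.1   # K hPa^-1 (assumed literature value)
    k3 = 3.739e5      # K^2 hPa^-1 (assumed literature value)
    pi_factor = 1.0e8 / (rho_w * R_v * (k3 / tm_kelvin + k2_prime))
    return pi_factor * zwd_m

# Example: a ZWD of 0.20 m with Tm = 270 K gives a PWV of roughly 0.03 m (~30 mm).
print(zwd_to_pwv(0.20, 270.0))
```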


2013 ◽  
Vol 9 (S298) ◽  
pp. 404-404
Author(s):  
Cuihua Du ◽  
Yunpeng Jia ◽  
Xiyan Peng

Abstract. Based on the South Galactic Cap U-band Sky Survey (SCUSS) and SDSS observations, we adopted the star-count method to analyze the stellar distribution in different directions of the Galaxy. We find that the derived model parameters may vary with the observed direction, which cannot simply be attributed to statistical errors.


2010 ◽  
Vol 11 (3) ◽  
pp. 781-796 ◽  
Author(s):  
Jonathan J. Gourley ◽  
Scott E. Giangrande ◽  
Yang Hong ◽  
Zachary L. Flamig ◽  
Terry Schuur ◽  
...  

Abstract Rainfall estimated from the polarimetric prototype of the Weather Surveillance Radar-1988 Doppler [WSR-88D (KOUN)] was evaluated using a dense Micronet rain gauge network for nine events on the Ft. Cobb research watershed in Oklahoma. The operation of KOUN and its upgrade to dual polarization was completed by the National Severe Storms Laboratory. Storm events included an extreme rainfall case from Tropical Storm Erin that had a 100-yr return interval. Comparisons with collocated Micronet rain gauge measurements indicated all six rainfall algorithms that used polarimetric observations had lower root-mean-squared errors and higher Pearson correlation coefficients than the conventional algorithm that used reflectivity factor alone when considering all events combined. The reflectivity-based relation R(Z) was the least biased with an event-combined normalized bias of −9%. The bias for R(Z), however, was found to vary significantly from case to case and as a function of rainfall intensity. This variability was attributed to different drop size distributions (DSDs) and the presence of hail. The synthetic polarimetric algorithm R(syn) had a large normalized bias of −31%, but this bias was found to be stationary. To evaluate whether polarimetric radar observations improve discharge simulation, recent advances in Markov chain Monte Carlo simulation using the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) were used. This Bayesian approach infers the posterior probability density function of model parameters and output predictions, which allows us to quantify HL-RDHM uncertainty. Hydrologic simulations were compared to observed streamflow and also to simulations forced by rain gauge inputs. The hydrologic evaluation indicated that all polarimetric rainfall estimators outperformed the conventional R(Z) algorithm, but only after their long-term biases were identified and corrected.
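A minimal sketch of a conventional reflectivity-only rainfall estimate of the kind denoted R(Z) above, using the widely quoted WSR-88D default relation Z = 300 R^1.4, together with the normalized bias used to compare estimators against gauges. The paper's polarimetric relations, e.g. R(syn), are not reproduced here, and the exact R(Z) coefficients used in the study may differ.

```python
# Hedged sketch: invert Z = a * R^b (Z in mm^6 m^-3, R in mm h^-1) and compute
# normalized bias against gauge observations.
def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    z_linear = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity factor
    return (z_linear / a) ** (1.0 / b)     # invert the Z-R power law

def normalized_bias_percent(estimates, observations):
    """Normalized bias in percent: positive values indicate overestimation."""
    total_obs = sum(observations)
    return 100.0 * (sum(estimates) - total_obs) / total_obs

print(rain_rate_from_dbz(40.0))            # ~12 mm/h for 40 dBZ
```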


Author(s):  
Stephen A Solovitz

Abstract Following volcanic eruptions, forecasters need accurate estimates of mass eruption rate (MER) to appropriately predict the downstream effects. Most analyses use simple correlations or models based on large eruptions at steady conditions, even though many volcanoes feature significant unsteadiness. To address this, a superposition model is developed based on a technique used for spray injection applications, which predicts plume height as a function of the time-varying exit velocity. This model can be inverted, providing estimates of MER using field observations of a plume. The model parameters are optimized using laboratory data for plumes with physically relevant exit profiles and Reynolds numbers, resulting in predictions that agree to within 10% of measured exit velocities. The model performance is examined using a historic eruption from Stromboli with well-documented unsteadiness, again providing MER estimates of the correct order of magnitude. This method can provide a rapid alternative for real-time forecasting of small, unsteady eruptions.
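For contrast, a minimal sketch of the conventional steady-state approach the abstract argues is inadequate for unsteady eruptions: a single empirical height-versus-eruption-rate scaling inverted to estimate MER from an observed plume height. The coefficients follow a commonly cited empirical fit (H [km] ≈ 2.0 V^0.241, with V the dense-rock-equivalent volume flux in m³/s) and the magma density is an assumed value; the paper's superposition model is not reproduced here.

```python
# Hedged sketch: steady-state plume-height scaling inverted for MER.
def mer_from_plume_height(height_km, magma_density=2500.0):
    """Invert H = 2.0 * V**0.241 (H in km, V in m^3/s DRE) and convert the
    volume flux to a mass eruption rate in kg/s using an assumed DRE density."""
    volume_flux = (height_km / 2.0) ** (1.0 / 0.241)
    return volume_flux * magma_density

print(mer_from_plume_height(10.0))   # ~2e6 kg/s for a 10 km high plume
```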


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, where some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe; percentage bias, PBIAS; and the ratio between the root mean square error and the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows us to construct a more streamlined 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 with HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
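A minimal sketch of the three objective functions named above, as they are commonly defined (logarithmic Nash–Sutcliffe efficiency, percentage bias, and RSR); the study's exact implementation may differ in detail, and the small offset added before taking logarithms is an assumption to avoid log(0).

```python
# Hedged sketch: common definitions of log-NSE, PBIAS and RSR.
import numpy as np

def log_nse(sim, obs, eps=1e-6):
    sim, obs = np.log(np.asarray(sim) + eps), np.log(np.asarray(obs) + eps)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rsr(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / np.std(obs)
```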


2014 ◽  
Vol 14 (23) ◽  
pp. 32233-32323 ◽  
Author(s):  
M. Bocquet ◽  
H. Elbern ◽  
H. Eskes ◽  
M. Hirtl ◽  
R. Žabkar ◽  
...  

Abstract. Data assimilation is used in atmospheric chemistry models to improve air quality forecasts, construct re-analyses of three-dimensional chemical (including aerosol) concentrations and perform inverse modeling of input variables or model parameters (e.g., emissions). Coupled chemistry meteorology models (CCMM) are atmospheric chemistry models that simulate meteorological processes and chemical transformations jointly. They offer the possibility to assimilate both meteorological and chemical data; however, because CCMM are fairly recent, data assimilation in CCMM has been limited to date. We review here the current status of data assimilation in atmospheric chemistry models with a particular focus on future prospects for data assimilation in CCMM. We first review the methods available for data assimilation in atmospheric models, including variational methods, ensemble Kalman filters, and hybrid methods. Next, we review past applications that have included chemical data assimilation in chemical transport models (CTM) and in CCMM. Observational data sets available for chemical data assimilation are described, including surface data, surface-based remote sensing, airborne data, and satellite data. Several case studies of chemical data assimilation in CCMM are presented to highlight the benefits obtained by assimilating chemical data in CCMM. A case study of data assimilation to constrain emissions is also presented. There are few examples to date of joint meteorological and chemical data assimilation in CCMM and potential difficulties associated with data assimilation in CCMM are discussed. As the number of variables being assimilated increases, it is essential to characterize correctly the errors; in particular, the specification of error cross-correlations may be problematic. In some cases, offline diagnostics are necessary to ensure that data assimilation can truly improve model performance. However, the main challenge is likely to be the paucity of chemical data available for assimilation in CCMM.
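A minimal sketch of one of the assimilation methods reviewed above, a stochastic ensemble Kalman filter analysis step. The state dimensions, observation operator and error statistics are illustrative assumptions; operational CCMM assimilation systems are far more elaborate (localization, inflation, non-linear operators).

```python
# Hedged sketch: stochastic EnKF analysis step with a linear observation operator.
import numpy as np

def enkf_analysis(X, y, H, R, rng=None):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
    rng = np.random.default_rng(0) if rng is None else rng
    y = np.asarray(y, dtype=float)
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)            # ensemble perturbations
    Pf = Xp @ Xp.T / (n_ens - 1)                      # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=n_ens).T
    return X + K @ (Y - H @ X)                        # analysis ensemble
```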


2008 ◽  
Vol 5 (3) ◽  
pp. 1641-1675 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, best parameter vector. The parameters of hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With these modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on the half-space depth was used. The depth of each of the N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash–Sutcliffe efficiency was used for this study). Based on the depth of the parameter vectors, one can identify a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
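A minimal sketch of approximating the half-space (Tukey) depth of one parameter vector with respect to a set of well-performing parameter vectors by random projections. The paper's exact depth computation may differ; this only illustrates the geometrical idea that vectors near the centre of the good-performing set receive high depth.

```python
# Hedged sketch: Monte Carlo approximation of half-space (Tukey) depth.
import numpy as np

def halfspace_depth(point, cloud, n_directions=2000, rng=None):
    """Approximate depth of `point` (d,) w.r.t. `cloud` (n, d): the minimum,
    over random directions, of the fraction of cloud points on one side of the
    hyperplane through `point`."""
    rng = np.random.default_rng(0) if rng is None else rng
    directions = rng.standard_normal((n_directions, cloud.shape[1]))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    proj = (cloud - point) @ directions.T           # (n, n_directions)
    frac_above = (proj >= 0).mean(axis=0)
    return np.minimum(frac_above, 1.0 - frac_above).min()

# Parameter vectors with high depth lie centrally within the well-performing
# set and are preferred as robust choices; outliers get depth close to zero.
```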


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
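A minimal sketch of the fuzzy-set idea described above: each cell of a binary flow-footprint map receives a membership value that decays with distance to the nearest inundated cell, so simulated and observed maps are compared within a tolerance length scale rather than strictly pixel by pixel. The membership function and agreement metric are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch: fuzzification of binary footprint maps via a distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuzzify(binary_map, length_scale_cells):
    """Membership = 1 inside the footprint, decaying linearly to 0 at the
    chosen length scale outside it."""
    dist = distance_transform_edt(binary_map == 0)   # distance to nearest footprint cell
    return np.clip(1.0 - dist / length_scale_cells, 0.0, 1.0)

def fuzzy_agreement(simulated, observed, length_scale_cells):
    """Mean cell-wise agreement between the fuzzified maps (1 = perfect match)."""
    fs = fuzzify(simulated, length_scale_cells)
    fo = fuzzify(observed, length_scale_cells)
    return 1.0 - np.abs(fs - fo).mean()
```

Increasing the length scale makes the comparison more forgiving of location errors (fewer false negatives) at the cost of admitting more false positives, which mirrors the trade-off described in the abstract.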

