On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

2012 ◽  
Vol 16 (11) ◽  
pp. 4157-4176 ◽  
Author(s):  
S. Stisen ◽  
A. L. Højberg ◽  
L. Troldborg ◽  
J. C. Refsgaard ◽  
B. S. B. Christensen ◽  
...  

Abstract. Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases can render the calibration exercise pointless, especially when supposedly physically based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time–space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990–2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly because recalibration adjusted model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test, in which calibrated DK-Model parameters were transferred to another region without site-specific calibration, showed better performance for parameter values based on the TSV correction. Similarly, the performance of the TSV correction method was superior when considering two single years with a much drier and a much wetter winter, respectively, compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and that it is of particular importance when using hydrological models to make predictions for future climates in which the snow/rain composition will differ from the past. This conclusion is expected to be applicable to mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
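
A minimal sketch of the difference between a climatological and a dynamic (TSV-style) catch correction, with purely hypothetical catch-ratio coefficients; operational schemes fit gauge-specific curves for solid and liquid precipitation:

```python
import numpy as np

def catch_correction_factor(wind_speed, temperature, t_snow=-1.0, t_rain=1.0):
    """Illustrative daily catch-correction factor from local wind speed (m/s)
    and temperature (deg C). Coefficients are hypothetical placeholders."""
    # Catch ratio (caught/true precipitation) drops with wind speed,
    # much faster for snow than for rain.
    catch_rain = np.clip(1.0 - 0.04 * wind_speed, 0.50, 1.0)
    catch_snow = np.clip(1.0 - 0.10 * wind_speed, 0.25, 1.0)
    # Temperature decides the solid/liquid mix across the t_snow..t_rain band.
    frac_liquid = np.clip((temperature - t_snow) / (t_rain - t_snow), 0.0, 1.0)
    catch = frac_liquid * catch_rain + (1.0 - frac_liquid) * catch_snow
    return 1.0 / catch  # multiply gauge totals by this factor

print(catch_correction_factor(wind_speed=6.0, temperature=-3.0))  # windy snow: ~2.5
print(catch_correction_factor(wind_speed=2.0, temperature=8.0))   # calm rain: ~1.09
```

A fixed monthly (HMM-style) factor applies the long-term mean of such values regardless of the weather on the day, which is exactly what fails in years whose winters deviate from the climatology.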


2015 ◽  
Vol 16 (6) ◽  
pp. 2345-2363 ◽  
Author(s):  
Steven M. Martinaitis ◽  
Stephen B. Cocks ◽  
Youcun Qi ◽  
Brian T. Kaney ◽  
Jian Zhang ◽  
...  

Abstract Precipitation gauge observations are routinely classified as ground truth and are utilized in the verification and calibration of radar-derived quantitative precipitation estimation (QPE). This study quantifies the challenges of utilizing automated hourly gauge networks to measure winter precipitation within the real-time Multi-Radar Multi-Sensor (MRMS) system from 1 October 2013 to 1 April 2014. Gauge observations were compared against gridded radar-derived QPE over the entire MRMS domain. Gauges that reported no precipitation were classified as potentially stuck in the MRMS system if collocated hourly QPE values indicated nonzero precipitation. The average number of potentially stuck gauge observations per hour doubled in environments defined by below-freezing surface wet-bulb temperatures, while the average number of observations when both the gauge and QPE reported precipitation decreased by 77%. Periods of significant winter precipitation impacts resulted in over a thousand stuck gauge observations per hour, or 10%–18% of all gauge observations across the MRMS domain. Partial winter impacts were observed before the gauges became stuck. Simultaneous postevent thaw and precipitation resulted in unreliable gauge values, which can introduce inaccurate bias correction factors when calibrating radar-derived QPE. The authors then describe a methodology to quality control (QC) gauge observations compromised by winter precipitation based on these results. A comparison of two gauge instrumentation types within the National Weather Service (NWS) Automated Surface Observing System (ASOS) network highlights the need for improved gauge instrumentation to provide more accurate liquid-equivalent values of winter precipitation.
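
A minimal sketch of the stuck-gauge logic described above, under assumed thresholds; the operational MRMS QC applies additional checks (e.g., for post-event thaw):

```python
def flag_stuck_gauges(gauge_mm, qpe_mm, wet_bulb_c, qpe_min=0.5, freeze_c=0.0):
    """Flag hourly gauges reporting zero while collocated radar QPE is nonzero.

    gauge_mm, qpe_mm, wet_bulb_c: dicts keyed by gauge id for one hour.
    qpe_min is a hypothetical minimum QPE (mm) to count as precipitation.
    """
    flags = {}
    for gid, gauge in gauge_mm.items():
        stuck = gauge == 0.0 and qpe_mm[gid] >= qpe_min
        # Below-freezing wet-bulb temperatures make a zero report far more
        # likely to be a winter-impacted gauge than a radar artifact.
        if stuck and wet_bulb_c[gid] <= freeze_c:
            flags[gid] = "suspect-winter"
        elif stuck:
            flags[gid] = "suspect"
        else:
            flags[gid] = "ok"
    return flags

hour = flag_stuck_gauges({"KABC": 0.0}, {"KABC": 2.3}, {"KABC": -2.1})
print(hour)  # {'KABC': 'suspect-winter'}
```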


2013 ◽  
Vol 9 (S298) ◽  
pp. 404-404
Author(s):  
Cuihua Du ◽  
Yunpeng Jia ◽  
Xiyan Peng

Abstract. Based on the South Galactic Cap U-band Sky Survey (SCUSS) and SDSS observations, we adopted the star-count method to analyze the stellar distribution in different directions of the Galaxy. We find that the derived model parameters may vary with the observed direction, which cannot simply be attributed to statistical errors.


2021 ◽  
pp. 1-18
Author(s):  
Gisela Vanegas ◽  
John Nejedlik ◽  
Pascale Neff ◽  
Torsten Clemens

Summary Forecasting production from hydrocarbon fields is challenging because of the large number of uncertain model parameters and the multitude of different observed data types. The large number of model parameters leads to uncertainty in the production forecast. Changing operating conditions [e.g., implementation of improved oil recovery or enhanced oil recovery (EOR)] results in model parameters becoming sensitive in the forecast that were not sensitive during the production history. Hence, simulation approaches need to be able to address uncertainty in model parameters as well as conditioning numerical models to a multitude of different observed data. Sampling from distributions of various geological and dynamic parameters allows for the generation of an ensemble of numerical models that can be falsified using principal-component analysis (PCA) against the different observed data. If the numerical models are not falsified, machine-learning (ML) approaches can be used to generate a large set of parameter combinations that can be conditioned to the different observed data. The data conditioning is followed by a final step ensuring that parameter interactions are covered. The methodology was applied to a sandstone oil reservoir with more than 70 years of production history containing dozens of wells. The resulting ensemble of numerical models is conditioned to all observed data. Furthermore, the posterior model parameter distributions are modified from the prior distributions only if the observed data are informative for the model parameters. Hence, changes in operating conditions can be forecast under uncertainty, which is essential if parameters that are nonsensitive in the history are sensitive in the forecast.
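
A sketch of the PCA falsification step under simple assumptions: an ensemble is flagged if the observed data lie outside its spread along the leading principal components (the paper's workflow adds ML-based conditioning afterwards; the z_max cutoff here is a hypothetical choice):

```python
import numpy as np

def pca_falsification(ensemble_sim, observed, n_pc=3, z_max=3.0):
    """ensemble_sim: (n_models, n_data) simulated observables per model.
    observed: (n_data,) measurements. Returns (falsified?, z-scores)."""
    mean = ensemble_sim.mean(axis=0)
    centered = ensemble_sim - mean
    # SVD of the centered ensemble gives its principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_pc].T          # ensemble members in PC space
    obs_score = (observed - mean) @ vt[:n_pc].T
    z = np.abs(obs_score) / scores.std(axis=0)
    return bool(np.any(z > z_max)), z
```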


Author(s):  
Roger C. von Doenhoff ◽  
Robert J. Streifel ◽  
Robert J. Marks

Abstract A model of the friction characteristics of carbon brakes is proposed to aid in understanding the causes of brake vibration. The model parameters are determined by a genetic algorithm in an attempt to identify differences in friction properties between brake applications during which vibration occurs and those during which there is no vibration. The model computes the brake torque as a function of wheel speed, brake pressure, and the carbon surface temperature. The surface temperature is computed using a five-node temperature model. The genetic algorithm chooses the model parameters to minimize the error between the model output and the torque measured during a dynamometer test. The basics of genetic algorithms and the results of the model parameter identification process are presented.
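
A compact sketch of such an identification loop: a hypothetical torque model plus a plain genetic algorithm (truncation selection and Gaussian mutation) minimizing the dynamometer mismatch; the model form and bounds are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def torque_model(params, wheel_speed, pressure, surface_temp):
    """Hypothetical friction model: mu falls off with speed and temperature."""
    mu0, k_v, k_t, r_eff = params
    mu = mu0 * (1.0 - k_v * wheel_speed) * (1.0 - k_t * surface_temp)
    return mu * pressure * r_eff

def ga_minimize(loss, bounds, pop=50, gens=200, mut=0.1):
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fitness = np.array([loss(ind) for ind in x])
        parents = x[np.argsort(fitness)][: pop // 2]      # truncation selection
        kids = parents[rng.integers(0, len(parents), pop - len(parents))]
        kids = kids + rng.normal(0.0, mut, kids.shape) * (hi - lo)  # mutation
        x = np.clip(np.vstack([parents, kids]), lo, hi)
    return x[np.argmin([loss(ind) for ind in x])]

# Usage against measured arrays v, p, temp, torque_meas from a dynamometer run:
# loss = lambda q: np.mean((torque_model(q, v, p, temp) - torque_meas) ** 2)
# best = ga_minimize(loss, bounds=[(0.1, 0.6), (0, 0.01), (0, 0.001), (0.1, 0.5)])
```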


Geophysics ◽  
2021 ◽  
pp. 1-37
Author(s):  
Xinhai Hu ◽  
Wei Guoqi ◽  
Jianyong Song ◽  
Zhifang Yang ◽  
Minghui Lu ◽  
...  

Coupling factors of sources and receivers vary dramatically due to the strong heterogeneity of the near surface, and they are as important as the model parameters for the success of an inversion. We propose a full waveform inversion (FWI) scheme that corrects for variable coupling factors while updating the model parameters. A linear inversion is embedded into the scheme to estimate the source and receiver factors and compute the amplitude weights according to the acquisition geometry. After the weights are introduced in the objective function, the inversion falls into the category of separable nonlinear least-squares problems. Hence, we can use the variable projection technique, widely used in source estimation problems, to invert the model parameters without knowledge of the source and receiver factors. The efficacy of the inversion scheme is demonstrated with two synthetic examples and one real data test.
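
A sketch of the embedded linear step under the common separable assumption obs ≈ a_s · b_r · pred: taking logs turns the coupling factors into a linear least-squares problem. The overall split of scale between sources and receivers is ambiguous; lstsq returns the minimum-norm split.

```python
import numpy as np

def estimate_coupling(obs_amp, pred_amp, pairs, n_src, n_rec):
    """obs_amp, pred_amp: per-trace amplitudes; pairs: (src, rec) per trace.
    Solves log(obs/pred) = log a_s + log b_r in the least-squares sense."""
    A = np.zeros((len(pairs), n_src + n_rec))
    for t, (s, r) in enumerate(pairs):
        A[t, s] = 1.0            # picks out log a_s
        A[t, n_src + r] = 1.0    # picks out log b_r
    rhs = np.log(np.asarray(obs_amp) / np.asarray(pred_amp))
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.exp(x[:n_src]), np.exp(x[n_src:])  # source, receiver factors
```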


2014 ◽  
Vol 18 (6) ◽  
pp. 2393-2413 ◽  
Author(s):  
H. Sellami ◽  
I. La Jeunesse ◽  
S. Benabdallah ◽  
N. Baghdadi ◽  
M. Vanclooster

Abstract. In this study a method for propagating the hydrological model uncertainty in discharge predictions of ungauged Mediterranean catchments using a model parameter regionalization approach is presented. The method is developed and tested for the Thau catchment located in southern France using the SWAT hydrological model. Regionalization of model parameters, based on physical similarity measured between gauged and ungauged catchment attributes, is a popular methodology for discharge prediction in ungauged basins, but it is often confronted with an arbitrary criterion for selecting the "behavioral" model parameter sets (Mps) at the gauged catchment. A more objective method is provided in this paper, where the transferable Mps are selected based on the similarity between the donor and the receptor catchments. In addition, the method allows propagating the modeling uncertainty while transferring the Mps to the ungauged catchments. Results indicate that physically similar catchments located within the same geographic and climatic region may exhibit similar hydrological behavior and can also be affected by similar model prediction uncertainty. Furthermore, the results suggest that model prediction uncertainty at the ungauged catchment increases as the dissimilarity between the donor and the receptor catchments increases. The methodology presented in this paper can be replicated and used in the regionalization of any hydrological model parameters for estimating streamflow at ungauged catchments.
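
A sketch of similarity-based donor selection, assuming normalized catchment descriptors and inverse-distance weighting; the paper's criterion additionally propagates each donor's predictive uncertainty to the receptor:

```python
import numpy as np

def select_donor_mps(receptor_attrs, donors, top_k=3):
    """donors: list of (attrs, mps) per gauged catchment, where attrs is a
    normalized descriptor vector (area, slope, land use, ...) and mps the
    behavioral parameter sets. Returns (mps, weight) for the top_k donors."""
    d = np.array([np.linalg.norm(np.asarray(a) - np.asarray(receptor_attrs))
                  for a, _ in donors])
    order = np.argsort(d)[:top_k]
    w = 1.0 / (d[order] + 1e-9)          # closer donors weigh more
    return [(donors[i][1], wi / w.sum()) for i, wi in zip(order, w)]
```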


1981 ◽  
Vol 240 (5) ◽  
pp. R259-R265 ◽  
Author(s):  
J. J. DiStefano

Design of optimal blood sampling protocols for kinetic experiments is discussed and evaluated, with the aid of several examples, including an endocrine system case study. The criterion of optimality is maximum accuracy of kinetic model parameter estimates. A simple example illustrates why a sequential experiment approach is required: optimal designs depend on the true model parameter values, knowledge of which is usually a primary objective of the experiment, as well as on the structure of the model and the measurement error (e.g., assay) variance. The methodology is evaluated from the results of a series of experiments designed to quantify the dynamics of distribution and metabolism of three iodothyronines: T3, T4, and reverse-T3. This analysis indicates that 1) the sequential optimal experiment approach can be effective and efficient in the laboratory, 2) it works in the presence of reasonably controlled biological variation, producing sufficiently robust sampling protocols, and 3) optimal designs can be highly efficient in practice, requiring for maximum accuracy a number of blood samples equal to the number of independently adjustable model parameters, no more and no fewer.
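
A toy D-optimal design for a mono-exponential kinetic model illustrates both points: the best sampling times depend on the assumed parameter values (hence the sequential approach), and the number of samples equals the number of adjustable parameters. The model and grid here are illustrative, not from the study.

```python
import numpy as np
from itertools import combinations

def jacobian(t, params):
    """Sensitivities of y = A * exp(-k * t) with respect to (A, k)."""
    A, k = params
    y = A * np.exp(-k * t)
    return np.column_stack([y / A, -t * y])

def d_optimal_times(candidate_t, params_guess, n_samples=2):
    """Brute-force D-optimal design: maximize det(J^T J) at a prior guess."""
    best, best_det = None, -np.inf
    for ts in combinations(candidate_t, n_samples):
        J = jacobian(np.array(ts), params_guess)
        det = np.linalg.det(J.T @ J)
        if det > best_det:
            best, best_det = ts, det
    return best

# The optimum moves with the guessed turnover rate k: redesign as estimates improve.
print(d_optimal_times(np.linspace(0.1, 10.0, 40), params_guess=(1.0, 0.5)))
print(d_optimal_times(np.linspace(0.1, 10.0, 40), params_guess=(1.0, 2.0)))
```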


Author(s):  
Suryanarayana R. Pakalapati ◽  
Hayri Sezer ◽  
Ismail B. Celik

Dual number arithmetic is a well-known strategy for automatic differentiation of computer codes that gives exact derivatives, to machine accuracy, of the computed quantities with respect to any of the involved variables. A common application of this concept in Computational Fluid Dynamics, or numerical modeling in general, is to assess the sensitivity of mathematical models to the model parameters. However, dual number arithmetic, in theory, finds the derivatives of the actual mathematical expressions evaluated by the computer code. Thus the sensitivity to a model parameter found by dual number automatic differentiation is essentially that of the combination of the actual mathematical equations, the numerical scheme, and the grid used to solve the equations, not just that of the model equations alone, as implied by some studies. This aspect of the sensitivity analysis of numerical simulations using dual number automatic differentiation is explored in the current study. A simple one-dimensional advection-diffusion equation is discretized using different finite volume schemes, and the resulting systems of equations are solved numerically. Derivatives of the numerical solutions with respect to parameters are evaluated automatically using dual number automatic differentiation. In addition, the derivatives are estimated using finite differencing for comparison. The analytical solution of the original PDE is also found, and derivatives of this solution are computed analytically. It is shown that a mathematical model can show different sensitivity to a model parameter depending on the numerical method employed to solve the equations and the grid resolution used. This distinction is important, since such interdependence needs to be carefully addressed to avoid confusion when reporting the sensitivity of predictions to a model parameter using a computer code. A systematic assessment of numerical uncertainty in the sensitivities computed using automatic differentiation is presented.
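
A minimal dual-number class makes the idea concrete: carrying a (value, derivative) pair through the arithmetic yields machine-accurate derivatives of whatever expression the code actually evaluates.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float  # function value
    der: float  # derivative with respect to the seeded input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def exp(x):
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)  # chain rule

# d/dk of f(k) = k * exp(k) at k = 1 is 2e; seed der=1 on the input.
k = Dual(1.0, 1.0)
f = k * exp(k)
print(f.val, f.der)  # 2.71828..., 5.43656...
```

Applied to a discretized solver, the same mechanism differentiates the numerical solution, scheme and grid included, which is precisely the distinction the abstract draws.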


2021 ◽  
Author(s):  
Jingshui Huang ◽  
Pablo Merchan-Rivera ◽  
Gabriele Chiogna ◽  
Markus Disse ◽  
Michael Rode

Water quality models make it possible to study dissolved oxygen (DO) dynamics and the resulting DO balances. However, the low temporal resolution of measurement data commonly limits the reliability of disentangling and quantifying instream DO process fluxes using models, and it can result in equifinality of model parameter sets. In this study, we aim to quantify the effect of combining emerging high-frequency monitoring techniques with water quality modelling on 1) improving the estimation of the model parameters and 2) reducing the forward uncertainty of the continuous quantification of instream DO balance pathways.

To this end, synthetic measurements for calibration at a given series of frequencies are used to estimate the model parameters of a conceptual water quality model of an agricultural river in Germany. The frequencies vary from a 15 min interval to daily, weekly, and monthly. A Bayesian inference approach using the DREAM algorithm is adopted to perform the uncertainty analysis of the DO simulation. Furthermore, the propagated uncertainties in daily fluxes of different DO processes, including reaeration, phytoplankton metabolism, benthic algae metabolism, nitrification, and organic matter deoxygenation, are quantified.

We hypothesize that the uncertainty will be larger when the measurement frequency of the calibration data is limited, and we expect that high-frequency measurements significantly reduce the uncertainty of flux estimates for the different DO balance components. This study highlights the critical role of high-frequency data in supporting model parameter estimation and its significant value in disentangling DO processes.
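
A much-simplified stand-in for the described setup: a hypothetical diel DO expression and a plain random-walk Metropolis sampler in place of DREAM, enough to show how the calibration-data frequency enters through the likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(params, t, do_obs, sigma=0.3):
    """Gaussian likelihood under a toy diel DO balance (hypothetical form):
    production minus respiration, damped by the reaeration rate."""
    k_rea, gpp, resp = params
    light = np.maximum(np.sin(2.0 * np.pi * t), 0.0)   # daylight proxy
    do_sim = 9.0 + (gpp * light - resp) / k_rea
    return -0.5 * np.sum(((do_obs - do_sim) / sigma) ** 2)

def metropolis(log_like, x0, steps=5000, scale=0.05):
    """Plain random-walk Metropolis as a stand-in for DREAM."""
    x = np.array(x0, float)
    ll = log_like(x)
    chain = []
    for _ in range(steps):
        prop = x + rng.normal(0.0, scale, x.size)
        if np.all(prop > 0):                            # rates stay positive
            ll_prop = log_like(prop)
            if np.log(rng.uniform()) < ll_prop - ll:    # accept/reject
                x, ll = prop, ll_prop
        chain.append(x.copy())
    return np.array(chain)

# t = np.arange(0, 2, 1 / 96)  # two days at 15 min resolution; thin t for
# daily/weekly designs and compare posterior spreads of (k_rea, gpp, resp).
# chain = metropolis(lambda p: log_likelihood(p, t, do_obs), x0=(2.0, 4.0, 3.0))
```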

