Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

2013 ◽  
Vol 10 (4) ◽  
pp. 5077-5119 ◽  
Author(s):  
Y. Sun ◽  
Z. Hou ◽  
M. Huang ◽  
F. Tian ◽  
L. R. Leung

Abstract. This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-square fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that model parameters calibrated by least-square fitting provide little improvement in the model simulations, whereas the sampling-based stochastic inversion approaches are consistent – as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.

2013 ◽  
Vol 17 (12) ◽  
pp. 4995-5011 ◽  
Author(s):  
Y. Sun ◽  
Z. Hou ◽  
M. Huang ◽  
F. Tian ◽  
L. Ruby Leung

Abstract. This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
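As an illustration of the sampling-based calibration described above, here is a minimal random-walk Metropolis sketch in Python, using a toy linear forward model in place of CLM4; the forward model, observations, and prior bounds are all hypothetical:

```python
import math
import random

def log_likelihood(theta, obs, forward, sigma=0.1):
    """Gaussian misfit between the forward-model response and observations."""
    return -sum((forward(theta) - y) ** 2 for y in obs) / (2.0 * sigma ** 2)

def metropolis(obs, forward, n_iter=5000, step=0.1, theta0=0.5):
    """Random-walk Metropolis sampler for one parameter, uniform prior on [0, 1]."""
    random.seed(1)
    theta = theta0
    ll = log_likelihood(theta, obs, forward)
    samples = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        if 0.0 <= prop <= 1.0:  # proposals outside the prior support are rejected
            ll_prop = log_likelihood(prop, obs, forward)
            if math.log(random.random()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

# Toy forward model with a known "true" parameter of 0.7
forward = lambda t: 2.0 * t
obs = [forward(0.7) + 0.01 * k for k in (-1, 0, 1)]
post = metropolis(obs, forward)
post_mean = sum(post[1000:]) / len(post[1000:])  # discard burn-in
```

As more observations are added, the posterior spread narrows, which is the narrowing of predictive intervals the abstract describes.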


2014 ◽  
Vol 2014 (1) ◽  
pp. 1113-1125
Author(s):  
Xiaolong Geng ◽  
Michel C. Boufadel

ABSTRACT In April 2010, the explosion of the Deepwater Horizon (DWH) drilling platform led to the release of nearly 4.9 million barrels of crude oil into the Gulf of Mexico. The oil was brought to the supratidal zone of beaches (landward of the high tide line) by waves during storms, and was buried during subsequent storms. The objective of this paper is to investigate the biodegradation of subsurface oil in a tidally influenced sand beach located at Bon Secour National Wildlife Refuge and polluted by the DWH oil spill. Two transects were installed perpendicular to the shoreline within the supratidal zone of the beach. One transect had four galvanized steel piezometer wells to measure the water level. The other transect had four stainless steel multiport sampling wells that were used to collect pore water samples below the beach surface. The samples were analyzed for dissolved oxygen (DO), nitrogen, and redox conditions. Sediment samples were also collected at different depths to measure residual oil concentrations and microbial biomass. As the biodegradation of hydrocarbons was of interest, a biological model based on Monod kinetics was developed and coupled to the transport model MARUN, a two-dimensional (vertical slice) finite element model for water flow and solute transport in tidally influenced beaches. The resulting coupled model, BIOMARUN, was used to simulate the biodegradation of total n-alkanes and polycyclic aromatic hydrocarbons (PAHs) trapped as residual oil in the unsaturated zone. Model parameter estimates were constrained by published Monod kinetics parameters. Field measurements, such as the concentrations of oil, microbial biomass, nitrogen, and DO, were used as inputs for the simulations. The biodegradation of alkanes and PAHs was predicted in the simulation, and sensitivity analyses were conducted to assess the effect of the model parameters on the modeling results. Simulation results indicated that n-alkanes and PAHs would be biodegraded by 80% after 2 ± 0.5 years and 3.5 ± 0.5 years, respectively.
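The Monod-kinetics coupling described above can be sketched, in much simplified form, as a forward-Euler integration of substrate (residual oil) and biomass; all parameter values below are hypothetical and are not those fitted in BIOMARUN:

```python
def monod_biodegradation(S0, X0, mu_max, Ks, Y, kd, t_end, dt=0.01):
    """Forward-Euler integration of Monod kinetics:
       dS/dt = -(mu / Y) * X,   dX/dt = (mu - kd) * X,
       with specific growth rate mu = mu_max * S / (Ks + S)."""
    S, X = S0, X0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S)
        S = max(S - dt * (mu / Y) * X, 0.0)   # substrate consumed by growth
        X = max(X + dt * (mu - kd) * X, 0.0)  # growth minus decay
    return S, X

# Hypothetical parameter values, for illustration only
S_end, X_end = monod_biodegradation(S0=100.0, X0=1.0, mu_max=0.5,
                                    Ks=10.0, Y=0.5, kd=0.05, t_end=50.0)
```

The real model additionally couples these kinetics to tidal flow and solute transport, which a zero-dimensional sketch like this omits.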


2021 ◽  
Vol 12 ◽  
Author(s):  
Sima Azizi ◽  
Daniel B. Hier ◽  
Blaine Allen ◽  
Tayo Obafemi-Ajayi ◽  
Gayla R. Olbricht ◽  
...  

Traumatic brain injury (TBI) imposes a significant economic and social burden. The diagnosis and prognosis of mild TBI, also called concussion, is challenging. Concussions are common among contact sport athletes. After a blow to the head, it is often difficult to determine who has had a concussion, who should be withheld from play, whether a concussed athlete is ready to return to the field, and which concussed athlete will develop post-concussion syndrome. Biomarkers can be detected in the cerebrospinal fluid and blood after traumatic brain injury, and their levels may have prognostic value. Despite significant investigation, questions remain as to the trajectories of blood biomarker levels over time after mild TBI. Modeling the kinetic behavior of these biomarkers could be informative. We propose a one-compartment kinetic model for S100B, UCH-L1, NF-L, GFAP, and tau biomarker levels after mild TBI based on accepted pharmacokinetic models for oral drug absorption. We approximated model parameters using previously published studies. Since parameter estimates were approximate, we performed uncertainty and sensitivity analyses. Using estimated kinetic parameters for each biomarker, we applied the model to an available post-concussion dataset of UCH-L1, GFAP, tau, and NF-L biomarker levels. We have demonstrated the feasibility of modeling blood biomarker levels after mild TBI with a one-compartment kinetic model. More work is needed to better establish model parameters and to understand the implications of the model for diagnostic use of these blood biomarkers in mild TBI.
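The one-compartment model with first-order absorption and elimination that underlies oral-dosing pharmacokinetics has a closed-form solution, the Bateman function. A sketch, with illustrative rate constants rather than fitted biomarker parameters:

```python
import math

def biomarker_level(t, ka, ke, scale=1.0):
    """One-compartment model: first-order release into blood at rate ka,
    first-order elimination at rate ke (the Bateman function)."""
    return scale * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

def time_to_peak(ka, ke):
    """Peak time, from setting the derivative of the Bateman function to zero."""
    return math.log(ka / ke) / (ka - ke)

# Illustrative rates (per day); real values would be fitted per biomarker
t_peak = time_to_peak(1.0, 0.2)
```

The level starts at zero, rises while release outpaces elimination, peaks at `t_peak`, and then decays, which is the qualitative trajectory shape such models predict for post-injury blood biomarkers.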


2018 ◽  
Author(s):  
Sean Patrick Lane ◽  
Erin Hennes

Introduction: A priori power analysis is increasingly being recognized as a useful tool for designing efficient research studies that improve the probability of robust and publishable results. However, power analyses for many empirical designs in the addiction sciences require consideration of numerous parameters. Identifying appropriate parameter estimates is challenging due to multiple sources of uncertainty, which can limit power analyses’ utility. Method: We demonstrate a sensitivity analysis approach for systematically investigating the impact of various model parameters on power. We illustrate this approach using three design aspects of importance for substance use researchers conducting longitudinal studies – base rates, individual differences (i.e., random slopes), and correlated predictors (e.g., co-use) – and examine how sensitivity analyses can illuminate strategies for controlling power vulnerabilities in such parameters. Results: Even large numbers of participants and/or repeated assessments can be insufficient to observe associations when substance use base rates are too low or too high. Large individual differences can adversely affect power, even with increased assessments. Collinear predictors are rarely detrimental unless the correlation is high. Conclusions: Increasing participants is usually more effective at buffering power than increasing assessments. Research designs can often enhance power by assessing participants twice as frequently as substance use occurs. Heterogeneity should be carefully estimated or empirically controlled, whereas collinearity infrequently impacts power significantly. Sensitivity analyses can identify regions of model parameter spaces that are vulnerable to bad guesses or sampling variability. These insights can be used to design robust studies that make optimal use of limited resources.
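The sensitivity of power to base rates described above can be probed by Monte Carlo simulation. This sketch estimates power to detect the slope of a binary substance-use predictor; the effect size, sample size, and base rates are assumed for illustration, and the design is far simpler than a real longitudinal model:

```python
import math
import random

def simulated_power(n, base_rate, effect=0.5, sims=500, crit=1.96):
    """Monte Carlo power for the OLS slope of a binary predictor:
       y = effect * x + noise,  x ~ Bernoulli(base_rate)."""
    random.seed(7)
    hits = 0
    for _ in range(sims):
        x = [1 if random.random() < base_rate else 0 for _ in range(n)]
        y = [effect * xi + random.gauss(0.0, 1.0) for xi in x]
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        if sxx == 0:
            continue  # no variation in the predictor: test impossible
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        if abs(b / se) > crit:
            hits += 1
    return hits / sims

p_hi = simulated_power(100, 0.5)   # balanced base rate
p_lo = simulated_power(100, 0.02)  # very rare behavior
```

Sweeping `base_rate` over a grid and plotting the resulting power curve is the sensitivity-analysis pattern the abstract advocates.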


2021 ◽  
Author(s):  
Udo Boehm ◽  
Nathan J. Evans ◽  
Quentin Frederik Gronau ◽  
Dora Matzke ◽  
Eric-Jan Wagenmakers ◽  
...  

Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the non-linearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work we show how these challenges can be addressed through a combination of Bayesian hierarchical modelling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study.
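Bayesian model averaging as used above can be sketched as weighting each model's parameter estimate by its posterior model probability; this toy version assumes equal prior model odds, and the estimates and marginal likelihoods below are illustrative:

```python
import math

def bma(estimates, log_marglik):
    """Average parameter estimates across models, weighted by posterior
    model probabilities derived from log marginal likelihoods."""
    m = max(log_marglik)
    w = [math.exp(l - m) for l in log_marglik]  # shift by max to avoid underflow
    z = sum(w)
    weights = [wi / z for wi in w]
    avg = sum(wi * e for wi, e in zip(weights, estimates))
    return avg, weights

# Two candidate models with equal evidence: the average splits 50/50
avg, weights = bma([1.0, 3.0], [0.0, 0.0])
```

Because the weights reflect model uncertainty, the averaged estimate avoids the overconfidence of conditioning on a single selected model.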


2021 ◽  
pp. 1-9
Author(s):  
Baigang Zhao ◽  
Xianku Zhang

Abstract To solve the problem of identifying ship model parameters quickly and accurately with the least test data, this paper proposes a nonlinear innovation parameter identification algorithm for ship models. The algorithm applies a nonlinear arc tangent function to the innovation term of the original stochastic gradient algorithm. A simulation was carried out on the ship Yu Peng using 26 sets of test data to compare the parameter identification capability of a least squares algorithm, the original stochastic gradient algorithm and the improved stochastic gradient algorithm. The results indicate that the improved algorithm enhances the accuracy of the parameter identification by about 12% when compared with the least squares algorithm. The effectiveness of the algorithm was further verified by a simulation of the ship Yu Kun. The results confirm the algorithm's capacity to rapidly produce highly accurate parameter identification from relatively small datasets. The approach can be extended to other parameter identification systems where only a small amount of test data is available.
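A minimal sketch of the idea of passing the innovation through an arc tangent function before a normalised stochastic-gradient update; the scalar model, gain, and synthetic data below are illustrative and not the Yu Peng test data:

```python
import math
import random

def arctan_sg_identify(xs, ys, theta0=0.0):
    """Normalised stochastic-gradient identification in which the innovation
    e = y - theta*x passes through arctan before the update, bounding the
    step against large residuals; for small e, atan(e) ~ e, so the fixed
    point is unchanged."""
    theta, r = theta0, 1.0
    for x, y in zip(xs, ys):
        r += x * x                    # cumulative normalising factor
        e = y - theta * x             # innovation
        theta += (x / r) * math.atan(e)
    return theta

# Synthetic regressor/response data from a known gain of 0.8
random.seed(5)
xs = [1.0 + random.random() for _ in range(300)]
ys = [0.8 * x + random.gauss(0.0, 0.01) for x in xs]
theta_hat = arctan_sg_identify(xs, ys)
```

The arc tangent saturates for large residuals, which is what makes the update robust to outliers in small test datasets.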


Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1610
Author(s):  
Katia Colaneri ◽  
Alessandra Cretarola ◽  
Benedetta Salterini

In this paper, we study the optimal investment and reinsurance problem of an insurance company whose investment preferences are described via a forward dynamic exponential utility in a regime-switching market model. Financial and actuarial frameworks are dependent since stock prices and insurance claims vary according to a common factor given by a continuous time finite state Markov chain. We construct the value function and we prove that it is a forward dynamic utility. Then, we characterize the optimal investment strategy and the optimal proportional level of reinsurance. We also perform numerical experiments and provide sensitivity analyses with respect to some model parameters.
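The common factor driving both stock prices and claims above is a continuous-time finite-state Markov chain; simulating such a regime path is straightforward. This sketch assumes two states and illustrative transition rates:

```python
import random

def simulate_regime_path(rates, t_end, state=0, seed=3):
    """Sample a path of a two-state continuous-time Markov chain: the
    holding time in state i is exponential with rate rates[i], after
    which the chain jumps to the other state."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.expovariate(rates[state])  # exponential holding time
        if t >= t_end:
            return path
        state = 1 - state
        path.append((t, state))

# Illustrative leaving rates for the two regimes
path = simulate_regime_path([1.0, 2.0], 10.0)
```

In the paper's setting, such a path would modulate both the market regime and the claim-arrival intensity simultaneously, which is the source of the dependence between the financial and actuarial frameworks.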


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
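The stepwise narrowing of parameter uncertainty that the stability check above looks for can be illustrated with a conjugate Gaussian analogue (the paper itself uses Markov chain Monte Carlo; the prior and noise variances below are illustrative):

```python
def posterior_sds(obs, prior_var=1.0, noise_var=0.25):
    """Posterior standard deviation of a Gaussian-mean parameter after
    each observation is added in turn: precisions add, so the posterior
    can only narrow as data accumulate."""
    var = prior_var
    sds = []
    for _ in obs:
        var = 1.0 / (1.0 / var + 1.0 / noise_var)
        sds.append(var ** 0.5)
    return sds

sds = posterior_sds(range(8))
```

A stable parameter shows exactly this monotone narrowing as tidal cycles are appended to the calibration run; a time-varying parameter instead shows posterior modes that drift between datasets.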


2004 ◽  
Vol 14 (04n05) ◽  
pp. 261-276 ◽  
Author(s):  
NILOY J. MITRA ◽  
AN NGUYEN ◽  
LEONIDAS GUIBAS

In this paper we describe and analyze a method based on local least-squares fitting for estimating the normals at all sample points of a point cloud data (PCD) set, in the presence of noise. We study the effects of neighborhood size, curvature, sampling density, and noise on the normal estimation when the PCD is sampled from a smooth curve in ℝ² or a smooth surface in ℝ³, and noise is added. The analysis allows us to find the optimal neighborhood size using other local information from the PCD. Experimental results are also provided.
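For the 2D case, a total-least-squares normal at a sample point can be computed in closed form from the covariance of its k nearest neighbors; this is a sketch of the general technique, not the authors' exact estimator:

```python
import math

def estimate_normal_2d(points, idx, k=5):
    """Total-least-squares normal at points[idx]: the direction
    perpendicular to the principal axis of the local 2x2 covariance."""
    px, py = points[idx]
    # k nearest neighbors by squared distance (includes the point itself)
    nbrs = sorted(points, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)[:k]
    mx = sum(p[0] for p in nbrs) / k
    my = sum(p[1] for p in nbrs) / k
    sxx = sum((p[0] - mx) ** 2 for p in nbrs)
    syy = sum((p[1] - my) ** 2 for p in nbrs)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in nbrs)
    # orientation of the principal (tangent) axis of the covariance
    ang = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    # the normal is perpendicular to the tangent
    return (-math.sin(ang), math.cos(ang))

# Samples along the x-axis: the estimated normal should point along y
pts = [(0.1 * i, 0.0) for i in range(10)]
nx, ny = estimate_normal_2d(pts, 5)
```

The neighborhood size `k` is exactly the quantity whose optimal choice, as a function of curvature, sampling density, and noise, the paper analyzes.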


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to develop good 1-day and 2-day ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
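A scalar Kalman filter with the usual predict/update cycle can be sketched as follows; the random-walk state model and noise variances are illustrative, not the paper's snowmelt-precipitation model:

```python
def kalman_1d(zs, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for state x_{k+1} = a*x_k + w (var q)
    and observation z_k = x_k + v (var r)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        x, p = a * x, a * a * p + q  # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noise-free constant observations: the estimate converges to the signal
est = kalman_1d([1.0] * 20)
```

The gain `k` is what lets the filter weight measured flows against the model prediction according to their respective uncertainties, which is how the paper's forecaster absorbs both data and parameter uncertainty.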

