Constraining Land Surface and Atmospheric Parameters of a Locally Coupled Model Using Observational Data

2005 ◽  
Vol 6 (2) ◽  
pp. 156-172 ◽  
Author(s):  
Yuqiong Liu ◽  
Hoshin V. Gupta ◽  
Soroosh Sorooshian ◽  
Luis A. Bastidas ◽  
William J. Shuttleworth

Abstract In coupled land surface–atmosphere modeling, the possibility and benefits of constraining model parameters using observational data bear investigation. Using the locally coupled NCAR Single-column Community Climate Model (NCAR SCCM), this study demonstrates some feasible, effective approaches to constrain parameter estimates for coupled land–atmosphere models and explores the effects of including both land surface and atmospheric parameters and fluxes/variables in the parameter estimation process, as well as the value of conducting the process in a stepwise manner. The results indicate that the use of both land surface and atmospheric flux variables to construct error criteria can lead to better-constrained parameter sets. The model with “optimal” parameters generally performs better than when a priori parameters are used, especially when some atmospheric parameters are included in the parameter estimation process. The overall conclusion is that, to achieve balanced, reasonable model performance on all variables, it is desirable to optimize both land surface and atmospheric parameters and use both land surface and atmospheric fluxes/variables for error criteria in the optimization process. The results also show that, for a coupled land–atmosphere model, there are potential advantages to using a stepwise procedure in which the land surface parameters are first identified in offline mode, after which the atmospheric parameters are determined in coupled mode. This stepwise scheme appears to provide comparable solutions to a fully coupled approach, but with considerably reduced computational time. The trade-off in the ability of a model to satisfactorily simulate different processes simultaneously, as observed in most multicriteria studies, is most evident for sensible heat and precipitation in this study for the NCAR SCCM.
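The stepwise scheme described above can be sketched on a hypothetical two-parameter toy model (not the NCAR SCCM): the land-surface parameter is identified first against an offline, land-only criterion, and the atmospheric parameter is then tuned in coupled mode with the land parameter held fixed.

```python
# Toy stand-ins for the offline and coupled model responses; the
# parameters, fluxes, and "observations" below are illustrative only.

def land_flux(a):          # offline land-surface response
    return 2.0 * a

def coupled_precip(a, b):  # atmospheric response depends on both params
    return a + 3.0 * b

OBS_FLUX, OBS_PRECIP = 4.0, 8.0        # synthetic observations
grid = [i / 100 for i in range(0, 301)]

# Step 1: land parameter from the offline (land-only) error criterion.
a_opt = min(grid, key=lambda a: (land_flux(a) - OBS_FLUX) ** 2)

# Step 2: atmospheric parameter in coupled mode, with a_opt held fixed.
b_opt = min(grid, key=lambda b: (coupled_precip(a_opt, b) - OBS_PRECIP) ** 2)

print(a_opt, b_opt)  # -> 2.0 2.0
```

Because step 2 searches only the atmospheric dimension, the stepwise search visits far fewer parameter combinations than a joint search over both dimensions, which is the source of the computational savings the abstract reports.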

Processes ◽  
2018 ◽  
Vol 6 (8) ◽  
pp. 100 ◽  
Author(s):  
Zhenyu Wang ◽  
Hana Sheikh ◽  
Kyongbum Lee ◽  
Christos Georgakis

Due to the complicated metabolism of mammalian cells, the corresponding dynamic mathematical models usually consist of large sets of differential and algebraic equations with a large number of parameters to be estimated. On the other hand, the measured data for estimating the model parameters are limited. Consequently, the parameter estimates may converge to a local minimum far from the optimal ones, especially when the initial guesses of the parameter values are poor. The methodology presented in this paper provides a systematic way for estimating parameters sequentially that generates better initial guesses for parameter estimation and improves the accuracy of the obtained metabolic model. The model parameters are first classified into four subsets of decreasing importance, based on the sensitivity of the model’s predictions to the parameters’ assumed values. The parameters in the most sensitive subset, typically a small fraction of the total, are estimated first. When the next most sensitive subset is estimated, the subsets of parameters with higher sensitivities are estimated again, using their previously obtained optimal values as the initial guesses. The power of this sequential estimation approach is illustrated through a case study on the estimation of parameters in a dynamic model of CHO cell metabolism in fed-batch culture. We show that the sequential parameter estimation approach improves model accuracy and that using limited data to estimate low-sensitivity parameters can worsen model performance.
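The sequential idea can be illustrated on a hypothetical two-parameter model y = p1·x + p2, where p1 is treated as the high-sensitivity parameter (a toy grid search stands in for a real optimizer; the data and parameters are illustrative, not the CHO model's):

```python
xs = [0.0, 1.0, 2.0, 3.0]
obs = [1.0, 3.0, 5.0, 7.0]          # generated with p1 = 2, p2 = 1

def sse(p1, p2):                    # sum of squared prediction errors
    return sum((p1 * x + p2 - y) ** 2 for x, y in zip(xs, obs))

grid = [i / 10 for i in range(0, 51)]

# Step 1: estimate the most sensitive parameter first (p2 held at 0).
p1_hat = min(grid, key=lambda p1: sse(p1, 0.0))
step1_sse = sse(p1_hat, 0.0)

# Step 2: bring in the less sensitive p2, then re-estimate p1 using the
# previously obtained optimum as the starting value of the (toy) search.
p2_hat = min(grid, key=lambda p2: sse(p1_hat, p2))
p1_hat = min(grid, key=lambda p1: sse(p1, p2_hat))

print(sse(p1_hat, p2_hat) < step1_sse)  # refinement reduces the error
```

Each pass re-estimates the more sensitive subset from its previous optimum, so later, poorly constrained parameters cannot drag the well-constrained ones far from their earlier estimates.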


Author(s):  
James R. McCusker ◽  
Kourosh Danai

A method of parameter estimation was recently introduced that separately estimates each parameter of the dynamic model [1]. In this method, regions, coined parameter signatures, are identified in the time-scale domain wherein the prediction error can be attributed to the error of a single model parameter. Based on these single-parameter associations, individual model parameters can then be estimated iteratively. Relative to nonlinear least squares, the proposed Parameter Signature Isolation Method (PARSIM) has two distinct attributes. One attribute of PARSIM is to leave the estimation of a parameter dormant when a parameter signature cannot be extracted for it. Another attribute is independence from the contour of the prediction error. The first attribute could cause erroneous parameter estimates when the parameters are not adapted continually. The second attribute, on the other hand, can provide a safeguard against local minima entrapment. These attributes motivate integrating PARSIM with a method, like nonlinear least squares, that is less prone to dormancy of parameter estimates. The paper demonstrates the merit of the proposed integrated approach in application to a difficult estimation problem.
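The integration logic can be caricatured as follows. This is a schematic, not PARSIM itself: a parameter is updated individually when a dominant error attribution (a stand-in for a parameter signature) is found for it, and otherwise receives a least-squares-style joint correction instead of being left dormant. All names and thresholds are hypothetical.

```python
def schematic_hybrid_step(params, residual, sensitivities, dominance=0.8):
    """One iteration; params and sensitivities are dicts keyed by name."""
    total = sum(abs(s) for s in sensitivities.values()) or 1.0
    updated = {}
    for name, p in params.items():
        share = abs(sensitivities[name]) / total
        if share >= dominance:
            # "signature" found: isolate and correct this parameter alone
            updated[name] = p - residual / sensitivities[name]
        else:
            # dormant under PARSIM alone; fall back to a gradient-style
            # joint correction weighted by the sensitivity share
            updated[name] = p - share * residual / (sensitivities[name] or 1.0)
    return updated

params = schematic_hybrid_step({"k": 1.0, "c": 1.0}, residual=0.5,
                               sensitivities={"k": 4.0, "c": 0.5})
print(params)
```

The point of the hybrid is visible in the branch structure: no parameter's update is skipped, yet strongly attributed errors are still corrected one parameter at a time.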


2020 ◽  
Vol 126 (4) ◽  
pp. 559-570 ◽  
Author(s):  
Ming Wang ◽  
Neil White ◽  
Jim Hanan ◽  
Di He ◽  
Enli Wang ◽  
...  

Abstract Background and Aims Functional–structural plant (FSP) models provide insights into the complex interactions between plant architecture and underlying developmental mechanisms. However, parameter estimation of FSP models remains challenging. We therefore used pattern-oriented modelling (POM) to test whether parameterization of FSP models can be made more efficient, systematic and powerful. With POM, a set of weak patterns is used to determine uncertain parameter values, instead of measuring them in experiments or observations, which often is infeasible. Methods We used an existing FSP model of avocado (Persea americana ‘Hass’) and tested whether POM parameterization would converge to an existing manual parameterization. The model was run for 10 000 parameter sets and model outputs were compared with verification patterns. Each verification pattern served as a filter for rejecting unrealistic parameter sets. The model was then validated by running it with the surviving parameter sets that passed all filters and then comparing their pooled model outputs with additional validation patterns that were not used for parameterization. Key Results POM calibration led to 22 surviving parameter sets. Within these sets, most individual parameters varied over a large range. One of the resulting sets was similar to the manually parameterized set. Using the entire suite of surviving parameter sets, the model successfully predicted all validation patterns. However, two of the surviving parameter sets could not, on their own, make the model predict all validation patterns. Conclusions Our findings suggest strong interactions among model parameters and their corresponding processes. Using all surviving parameter sets takes these interactions into account fully, thereby improving model performance regarding validation and model output uncertainty.
We conclude that POM calibration allows FSP models to be developed in a timely manner without having to rely on field or laboratory experiments, or on cumbersome manual parameterization. POM also increases the predictive power of FSP models.
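The POM filtering step described in Methods can be sketched in a few lines. The model, patterns, and thresholds below are illustrative stand-ins, not the avocado FSP model: parameter sets are sampled, each verification pattern rejects sets whose output falls outside a plausible range, and the survivors are then checked against an independent validation pattern.

```python
import random

random.seed(42)

def model(a, b):                      # toy stand-in for the FSP model
    return {"height": a * 10, "branches": a * b}

sampled = [(random.uniform(0, 2), random.uniform(0, 5)) for _ in range(10000)]

# Each verification pattern acts as a filter rejecting unrealistic sets.
filters = [
    lambda out: 8 <= out["height"] <= 12,    # pattern 1: plausible height
    lambda out: 2 <= out["branches"] <= 4,   # pattern 2: plausible branching
]
survivors = [p for p in sampled if all(f(model(*p)) for f in filters)]

# Validation: survivors vs. a pattern not used for filtering.
def branch_ratio(p):
    out = model(*p)
    return 10 * out["branches"] / out["height"]

valid = [p for p in survivors if 1.0 <= branch_ratio(p) <= 5.0]
print(len(survivors), "of", len(sampled), "sets survive;",
      len(valid), "pass validation")
```

As in the study, the survivors form an ensemble rather than a single calibrated set, and it is their pooled output that is confronted with the validation patterns.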


2020 ◽  
Author(s):  
Stephan Thober ◽  
Matthias Kelbling ◽  
Florian Pappenberger ◽  
Christel Prudhomme ◽  
Gianpaolo Balsamo ◽  
...  

<p>The representation of the water and energy cycle in environmental models is closely linked to the parameter values used in the process parametrizations. The dimension of the parameter space in spatially distributed environmental models corresponds to the number of grid cells multiplied by the number of parameters per grid cell. For large-scale simulations on national and continental scales, the dimensionality of the parameter space is too high for efficient parameter estimation using inverse estimation methods. A regularization of the parameter space is necessary to reduce its dimensionality. The Multiscale Parameter Regionalization (MPR) is one approach to achieve this.</p><p>MPR translates local geophysical properties into model parameters. It consists of two steps: 1) local high-resolution geophysical data sets (e.g. soil maps) are translated into model parameters using a transfer function. 2) the high-resolution model parameters are scaled to the model resolution using suitable upscaling operators (e.g., harmonic mean). The MPR technique was introduced into the mesoscale hydrologic model (mHM, Samaniego et al. 2010, Kumar et al. 2013) and is a key factor in its success in transferring parameters across scales and locations.</p><p>In this study, we apply MPR to vegetation and soil parameters in the land surface model HTESSEL. This model is the land-surface component of the European Centre for Medium-Range Weather Forecasts (ECMWF) seasonal forecasting system. About 100 hard-coded parameters have been extracted to allow for a comprehensive sensitivity analysis and parameter estimation.</p><p>We analyze simulated evaporation and runoff fluxes by HTESSEL using parameters estimated by MPR in comparison to a default HTESSEL setup over Europe. The magnitude of simulated long-term fluxes deviates the most (up to 10% and 20% for evapotranspiration and runoff, respectively) in regions with a large subgrid variability in geophysical attributes (e.g., soil texture). 
The choice of transfer functions and upscaling operators influences the magnitude of these differences and governs model performance assessed after calibration against observations (e.g. streamflow).</p><p><strong>References:</strong></p><p>Samaniego L., et al.  <strong>https://doi.org/10.1029/2008WR007327</strong></p><p>Kumar, R., et al.  <strong>https://doi.org/10.1029/2012WR012195</strong></p>
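The two MPR steps can be sketched for a hypothetical soil parameter: (1) a transfer function maps high-resolution sand fraction to hydraulic conductivity, and (2) the fine-grid values are upscaled to one model cell with a harmonic mean, one of the upscaling operators mentioned above. The transfer function and data are illustrative assumptions, not mHM's or HTESSEL's.

```python
def transfer(sand_fraction):
    # hypothetical transfer function: conductivity grows with sand content
    return 0.1 + 2.0 * sand_fraction

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

fine_sand = [0.1, 0.3, 0.5, 0.7]           # high-resolution geophysical data
fine_k = [transfer(s) for s in fine_sand]  # step 1: parameters on fine grid
coarse_k = harmonic_mean(fine_k)           # step 2: upscale to model cell
print(round(coarse_k, 3))
```

Because only the transfer-function coefficients are calibrated, not one parameter per grid cell, the dimensionality of the estimation problem stays fixed as the domain grows, which is the regularization the abstract refers to.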


2004 ◽  
Vol 35 (3) ◽  
pp. 191-208 ◽  
Author(s):  
Oddbjørn Bruland ◽  
Glen E. Liston ◽  
Jorien Vonk ◽  
Knut Sand ◽  
Ånund Killingtveit

In Arctic regions snow cover has a major influence on the environment both in a hydrological and ecological context. Due to strong winds and open terrain the snow is heavily redistributed and the snow depth is quite variable. This has a significant influence on the snow cover depletion and the duration of the melting season. These are important parameters in the context of climate change. They influence the land surface albedo, the possibilities of greenhouse gas exchange and the length of the plant-growing season, the latter also being important for the arctic terrestrial fauna. The aim of this study is to test to what degree a numerical model is able to recreate an observed snow distribution in sites located in Svalbard and Norway. Snow depth frequency distribution, a snow depth rank order test and the location of snowdrifts and erosion areas were used as criteria for the model performance. SnowTran-3D is the model used in this study. In order to allow for occasions during the winter with milder climate and temperatures above freezing, a snow strengthening calculation was included in the model. The model result was compared to extensive observation datasets for each site and the sensitivity of the model result to the main model parameters was tested. For all three sites, the modelled snow depth frequency distribution was highly correlated to the observed distribution and the model located the snowdrifts and erosion areas in correspondence with those observed at the sites.


2018 ◽  
Vol 11 (8) ◽  
pp. 3313-3325 ◽  
Author(s):  
Alex G. Libardoni ◽  
Chris E. Forest ◽  
Andrei P. Sokolov ◽  
Erwan Monier

Abstract. For over 20 years, the Massachusetts Institute of Technology Earth System Model (MESM) has been used extensively for climate change research. The model is under continuous development with components being added and updated. To provide transparency in the model development, we perform a baseline evaluation by comparing model behavior and properties in the newest version to the previous model version. In particular, changes resulting from updates to the land surface model component and the input forcings used in historical simulations of climate change are investigated. We run an 1800-member ensemble of MESM historical climate simulations where the model parameters that set climate sensitivity, the rate of ocean heat uptake, and the net anthropogenic aerosol forcing are systematically varied. By comparing model output to observed patterns of surface temperature changes and the linear trend in the increase in ocean heat content, we derive probability distributions for the three model parameters. Furthermore, we run a 372-member ensemble of transient climate simulations where all model forcings are fixed and carbon dioxide concentrations are increased at the rate of 1 % year−1. From these runs, we derive response surfaces for transient climate response and thermosteric sea level rise as a function of climate sensitivity and ocean heat uptake. We show that the probability distributions shift towards higher climate sensitivities and weaker aerosol forcing when using the new model and that the climate response surfaces are relatively unchanged between model versions. Because the response surfaces are independent of the changes to the model forcings and similar between model versions with different land surface models, we suggest that the change in land surface model has limited impact on the temperature evolution in the model. Thus, we attribute the shifts in parameter estimates to the updated model forcings.


2011 ◽  
Vol 15 (11) ◽  
pp. 3591-3603 ◽  
Author(s):  
R. Singh ◽  
T. Wagener ◽  
K. van Werkhoven ◽  
M. E. Mann ◽  
R. Crane

Abstract. Projecting how future climatic change might impact streamflow is an important challenge for hydrologic science. The common approach to solve this problem is by forcing a hydrologic model, calibrated on historical data or using a priori parameter estimates, with future scenarios of precipitation and temperature. However, several recent studies suggest that the climatic regime of the calibration period is reflected in the resulting parameter estimates and model performance can be negatively impacted if the climate for which projections are made is significantly different from that during calibration. So how can we calibrate a hydrologic model for historically unobserved climatic conditions? To address this issue, we propose a new trading-space-for-time framework that utilizes the similarity between the predictions under change (PUC) and predictions in ungauged basins (PUB) problems. In this new framework we first regionalize climate dependent streamflow characteristics using 394 US watersheds. We then assume that this spatial relationship between climate and streamflow characteristics is similar to the one we would observe between climate and streamflow over long time periods at a single location. This assumption is what we refer to as trading-space-for-time. Therefore, we change the limits for extrapolation to future climatic situations from the restricted locally observed historical variability to the variability observed across all watersheds used to derive the regression relationships. A typical watershed model is subsequently calibrated (conditioned) on the predicted signatures for any future climate scenario to account for the impact of climate on model parameters within a Bayesian framework. As a result, we can obtain ensemble predictions of continuous streamflow at both gauged and ungauged locations. 
The new method is tested in five US watersheds located in historically different climates using synthetic climate scenarios generated by increasing mean temperature by up to 8 °C and changing mean precipitation by −30% to +40% from their historical values. Depending on the aridity of the watershed, streamflow projections using adjusted parameters became significantly different from those using historically calibrated parameters if precipitation change exceeded −10% or +20%. In general, the trading-space-for-time approach resulted in a stronger watershed response to climate change for both high and low flow conditions.
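The synthetic scenarios above follow a standard delta-change construction: an additive shift for temperature and a multiplicative factor for precipitation applied to a historical series. A minimal sketch with illustrative values (not the study's data):

```python
hist_temp = [10.0, 12.0, 15.0]         # historical temperatures, deg C
hist_precip = [50.0, 80.0, 40.0]       # historical precipitation, mm

def scenario(dT, dP_percent):
    """Delta-change perturbation of the historical forcing series."""
    temp = [t + dT for t in hist_temp]
    precip = [p * (1 + dP_percent / 100.0) for p in hist_precip]
    return temp, precip

# e.g. +8 deg C warming combined with a -30% precipitation change
temp, precip = scenario(8.0, -30.0)
print(temp[0], precip[0])  # 18.0 35.0
```

Sweeping dT over 0 to +8 °C and dP_percent over −30% to +40% spans the scenario grid the study evaluates.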


1993 ◽  
Vol 27 (9) ◽  
pp. 1034-1039 ◽  
Author(s):  
Ene I. Ette ◽  
Andrew W. Kelman ◽  
Catherine A. Howie ◽  
Brian Whiting

OBJECTIVE: To develop new approaches for evaluating results obtained from simulation studies used to determine sampling strategies for efficient estimation of population pharmacokinetic parameters. METHODS: One-compartment kinetics with intravenous bolus injection was assumed and the simulated data (one observation made on each experimental unit [human subject or animal]), were analyzed using NONMEM. Several approaches were used to judge the efficiency of parameter estimation. These included: (1) individual and joint confidence intervals (CIs) coverage for parameter estimates that were computed in a manner that would reveal the influence of bias and standard error (SE) on interval estimates; (2) percent prediction error (%PE) approach; (3) the incidence of high pair-wise correlations; and (4) a design number approach. The design number (Φ) is a new statistic that provides a composite measure of accuracy and precision (using SE). RESULTS: The %PE approach is useful only in examining the efficiency of estimation of a parameter considered independently. The joint CI coverage approach permitted assessment of the accuracy and reliability of all model parameter estimates. The Φ approach is an efficient method of achieving an accurate estimate of parameter(s) with good precision. Both the Φ for individual parameter estimation and the overall Φ for the estimation of model parameters led to optimal experimental design. CONCLUSIONS: Application of these approaches to the analyses of the results of the study was found useful in determining the best sampling design (from a series of two sampling times designs within a study) for efficient estimation of population pharmacokinetic parameters.
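The percent prediction error (%PE) named in approach (2) is a standard accuracy measure, 100 × (estimate − true)/true. A minimal sketch with illustrative values follows; the design number Φ is defined in the paper itself and is not reproduced here.

```python
def percent_prediction_error(estimate, true_value):
    """%PE: signed relative error of a parameter estimate, in percent."""
    return 100.0 * (estimate - true_value) / true_value

true_clearance = 2.0                     # L/h, hypothetical true parameter
estimates = [1.8, 2.1, 2.4]              # estimates from replicate simulations
pes = [percent_prediction_error(e, true_clearance) for e in estimates]
print([round(p, 1) for p in pes])  # -> [-10.0, 5.0, 20.0]
```

As the abstract notes, %PE judges each parameter in isolation; it says nothing about joint accuracy across parameters, which is what the joint CI coverage and Φ approaches address.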


Processes ◽  
2018 ◽  
Vol 6 (11) ◽  
pp. 231 ◽  
Author(s):  
Ernie Che Mid ◽  
Vivek Dua

In this work, a methodology for fault detection in wastewater treatment systems, based on parameter estimation, using multiparametric programming is presented. The main idea is to detect faults by estimating model parameters, and monitoring the changes in residuals of model parameters. In the proposed methodology, a nonlinear dynamic model of wastewater treatment was discretized to algebraic equations using Euler’s method. A parameter estimation problem was then formulated and transformed into a square system of parametric nonlinear algebraic equations by writing the optimality conditions. The parametric nonlinear algebraic equations were then solved symbolically to obtain the concentration of substrate in the inflow, the inhibition coefficient, and the specific growth rate as explicit functions of the state variables (concentration of biomass, concentration of organic matter, concentration of dissolved oxygen, and volume). The estimated model parameter values were compared with values from the normal operation. If the residual of model parameters exceeds a certain threshold value, a fault is detected. The application demonstrates the viability of the approach, and highlights its ability to detect faults in wastewater treatment systems by providing quick and accurate parameter estimates using the evaluation of explicit parametric functions.
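The core idea can be shown on a deliberately simple case. For a toy substrate balance dS/dt = −μX (not the paper's wastewater model), Euler discretization of one step gives S_next = S_prev − μ·X·Δt, whose optimality condition yields the parameter as an explicit function of the measured states, cheap to evaluate online for residual-based fault detection. Nominal value and threshold below are illustrative assumptions.

```python
def estimate_mu(s_prev, s_next, x, dt):
    # Explicit parametric solution of S_next = S_prev - mu*X*dt for mu:
    # no iterative optimization is needed at run time.
    return (s_prev - s_next) / (x * dt)

MU_NOMINAL, THRESHOLD = 0.40, 0.05   # hypothetical normal-operation values

mu_hat = estimate_mu(s_prev=5.0, s_next=4.55, x=1.5, dt=0.5)
fault = abs(mu_hat - MU_NOMINAL) > THRESHOLD   # residual-based detection
print(round(mu_hat, 2), fault)
```

In the paper the explicit functions are derived symbolically from the full optimality conditions rather than by hand, but the run-time pattern is the same: evaluate the explicit function on the measured states and compare the residual against a threshold.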


2013 ◽  
Vol 17 (1) ◽  
pp. 149-161 ◽  
Author(s):  
S. Gharari ◽  
M. Hrachowitz ◽  
F. Fenicia ◽  
H. H. G. Savenije

Abstract. Conceptual hydrological models rely on calibration for the identification of their parameters. As these models are typically designed to reflect real catchment processes, a key objective of an appropriate calibration strategy is the determination of parameter sets that reflect a "realistic" model behavior. Previous studies have shown that parameter estimates for different calibration periods can be significantly different. This questions model transposability in time, which is one of the key conditions for the set-up of a "realistic" model. This paper presents a new approach that selects parameter sets that provide a consistent model performance in time. The approach consists of testing model performance in different periods, and selecting parameter sets that are as close as possible to the optimum of each individual sub-period. While aiding model calibration, the approach is also useful as a diagnostic tool, illustrating tradeoffs in the identification of time-consistent parameter sets. The approach is applied to a case study in Luxembourg using the HyMod hydrological model as an example.
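A toy illustration of the selection idea: score each candidate parameter set in every sub-period and keep the set whose worst distance to the per-period optimum is smallest. This minimax reading of "as close as possible to the optimum of each individual sub-period" is one simple interpretation, and the skill scores are illustrative, not the Luxembourg case study's.

```python
candidates = {"A": [0.90, 0.60], "B": [0.80, 0.78], "C": [0.70, 0.82]}
#             name: [skill in period 1, skill in period 2]

# Best achievable skill in each sub-period across all candidate sets.
best_per_period = [max(v[i] for v in candidates.values()) for i in (0, 1)]

def worst_gap(scores):
    # Largest shortfall from the per-period optimum: small values mean
    # consistently near-optimal performance in every sub-period.
    return max(best - s for best, s in zip(best_per_period, scores))

chosen = min(candidates, key=lambda n: worst_gap(candidates[n]))
print(chosen)  # prints B
```

Set A is best in period 1 but collapses in period 2; the consistency criterion prefers B, which is never optimal but never far from optimal, matching the diagnostic role the abstract describes: the gaps themselves expose the trade-offs between sub-periods.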

