Linking biogeochemistry to hydro-geometrical variability in tidal estuaries: a generic modeling approach

2016, Vol 20 (3), pp. 991-1030
Author(s):  
Chiara Volta ◽  
Goulven Gildas Laruelle ◽  
Sandra Arndt ◽  
Pierre Regnier

Abstract. This study applies the Carbon-Generic Estuary Model (C-GEM) modeling platform to simulate the estuarine biogeochemical dynamics – in particular the air–water CO2 exchange – in three idealized tidal estuaries characterized by increasing riverine influence, from a so-called "marine estuary" to a "riverine estuary". An intermediate case called "mixed estuary" is also considered. C-GEM uses a generic biogeochemical reaction network and a unique set of model parameters extracted from a comprehensive literature survey to perform steady-state simulations representing average conditions for temperate estuaries worldwide. Climate and boundary conditions are extracted from published global databases (e.g., World Ocean Atlas, GLORICH) and catchment model outputs (GlobalNEWS2). The whole-system biogeochemical indicators net ecosystem metabolism (NEM), C and N filtering capacities (FCTC and FCTN, respectively) and CO2 gas exchanges (FCO2) are calculated across the three idealized systems and are related to their main hydrodynamic and transport characteristics. A sensitivity analysis, which propagates the parameter uncertainties, is also carried out, followed by projections of changes in the biogeochemical indicators for the year 2050. Results show that the average C filtering capacities for baseline conditions are 40, 30 and 22 % for the marine, mixed and riverine estuary, respectively, while N filtering capacities, calculated in a similar fashion, range from 22 % for the marine estuary to 18 and 15 % for the mixed and the riverine estuaries. Sensitivity analysis performed by varying the rate constants for aerobic degradation, denitrification and nitrification over the range of values reported in the literature significantly widens these ranges for both C and N. Simulations for the year 2050 suggest that all estuaries will remain largely heterotrophic, although a slight improvement of the estuarine trophic status is predicted. In addition, our results suggest that, while the riverine and mixed systems will only marginally be affected by an increase in atmospheric pCO2, the marine estuary is likely to become a significant CO2 sink in its downstream section. In the decades to come, such a change in behavior might strengthen the overall CO2 sink of the estuary–coastal ocean continuum.
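The filtering capacities reported above have a simple definition: the fraction of the incoming load removed within the estuary. Below is a minimal illustrative sketch of that bookkeeping, not C-GEM itself; the loads are hypothetical numbers chosen only to reproduce the reported baseline percentages.

```python
# Minimal sketch (not C-GEM): whole-system filtering capacity computed
# from boundary fluxes, as the fraction of the incoming riverine load
# that is removed within the estuary. All values are illustrative.

def filtering_capacity(riverine_load, marine_export):
    """Fraction of the incoming load removed within the estuary."""
    return (riverine_load - marine_export) / riverine_load

# Hypothetical loads chosen to match the reported baseline FC_TC values.
for system, removed_fraction in [("marine", 0.40), ("mixed", 0.30), ("riverine", 0.22)]:
    load = 100.0                                   # incoming C load (arbitrary units)
    export = load * (1.0 - removed_fraction)       # load exported to the coastal ocean
    print(f"{system} estuary: FC_TC = {filtering_capacity(load, export):.0%}")
```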

2015, Vol 12 (7), pp. 6351-6435
Author(s):  
C. Volta ◽  
G. G. Laruelle ◽  
S. Arndt ◽  
P. Regnier

Abstract. This study applies the Carbon-Generic Estuary Model (C-GEM) modeling platform to simulate the estuarine biogeochemical dynamics – in particular the air–water CO2 exchange – in three idealized end-member systems covering the main features of tidal alluvial estuaries. C-GEM uses a generic biogeochemical reaction network and a unique set of model parameters extracted from a comprehensive literature survey to perform steady-state simulations representing average conditions for temperate estuaries worldwide. Climate and boundary conditions are extracted from published global databases (e.g., World Ocean Atlas, GLORICH) and catchment model outputs (GlobalNEWS2). The whole-system biogeochemical indicators net ecosystem metabolism (NEM), C and N filtering capacities (FCTC and FCTN, respectively) and CO2 gas exchanges (FCO2) are calculated across the three end-member systems and are related to their main hydrodynamic and transport characteristics. A sensitivity analysis, which propagates the parameter uncertainties, is also carried out, followed by projections of changes in the biogeochemical indicators for the year 2050. Results show that the average C filtering capacities for baseline conditions are 40, 30 and 22 % for the marine, mixed and riverine estuary, respectively. This translates into a first-order, global CO2 outgassing flux for tidal estuaries between 0.04 and 0.07 Pg C yr−1. N filtering capacities, calculated in a similar fashion, range from 22 % for the marine estuary to 18 and 15 % for the mixed and the riverine estuary, respectively. Sensitivity analysis performed by varying the rate constants for aerobic degradation, denitrification and nitrification over the range of values reported in the literature significantly widens these ranges for both C and N. Simulations for the year 2050 indicate that all end-member estuaries will remain net heterotrophic and that, while the riverine and mixed systems will only be marginally affected by river load changes and the increase in atmospheric pCO2, the marine estuary is likely to become a significant CO2 sink in its downstream section. In the decades to come, such a change in behavior might strengthen the overall CO2 sink of the estuary–coastal ocean continuum.
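The global outgassing figure follows from an upscaling step: an area-specific CO2 flux multiplied by the global surface area of tidal estuaries. The sketch below shows the arithmetic only; both input values are placeholders, not the values used in the study, chosen so the result lands inside the reported 0.04-0.07 Pg C yr−1 range.

```python
# Back-of-the-envelope upscaling sketch (illustrative only; the flux
# density and surface area below are hypothetical placeholders, not the
# study's actual inputs).

MOLAR_MASS_C = 12.0                  # g C per mol C

fco2 = 40.0                          # hypothetical area-specific flux, mol C m^-2 yr^-1
area = 0.1e12                        # hypothetical global tidal-estuary area, m^2

global_flux_pg = fco2 * MOLAR_MASS_C * area / 1e15   # convert g to Pg
print(f"global CO2 outgassing ~ {global_flux_pg:.2f} Pg C yr^-1")   # ~0.05
```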


Author(s):  
Souransu Nandi ◽  
Tarunraj Singh

The focus of this paper is on the global sensitivity analysis (GSA) of linear systems with time-invariant model parameter uncertainties that are driven by stochastic inputs. The Sobol' indices of the evolving mean and variance estimates of the states are used to assess the impact of the time-invariant uncertain model parameters and of the statistics of the stochastic input on the uncertainty of the output. Numerical results on two benchmark problems illustrate that parameters which contribute little to the uncertainty of the mean can nonetheless contribute substantially to the uncertainty of the variance. The paper uses a polynomial chaos (PC) approach to synthesize a surrogate probabilistic model of the stochastic system, using Lagrange interpolation polynomials (LIPs) as PC bases. The Sobol' indices are then evaluated directly from the PC coefficients. Although this concept is not new, a novel interpretation of stochastic collocation-based PC and intrusive PC is presented in which the two are shown to represent identical probabilistic models when the system under consideration is linear. This result permits treating linear models as black boxes when developing intrusive PC surrogates.
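Reading Sobol' indices off PC coefficients works because, for an orthonormal basis, the response variance is the sum of squared coefficients (excluding the mean term), and each first-order index collects the terms involving one parameter alone. A minimal sketch of that bookkeeping follows; the multi-indices and coefficient values are hypothetical, and this is not the paper's LIP-based construction.

```python
# Sketch: Sobol' indices read directly from polynomial chaos (PC)
# coefficients of an orthonormal expansion. Coefficients below are
# hypothetical, for illustration only.

# Each entry: (multi_index over 2 parameters, PC coefficient), where
# multi_index[i] is the polynomial degree in parameter i.
pc_terms = [((0, 0), 3.0),   # mean term, excluded from the variance
            ((1, 0), 0.8),
            ((0, 1), 0.3),
            ((2, 0), 0.2),
            ((1, 1), 0.1)]   # interaction term

variance = sum(c**2 for idx, c in pc_terms if any(idx))

def first_order_sobol(i):
    # Variance contributed by terms involving parameter i alone.
    num = sum(c**2 for idx, c in pc_terms
              if idx[i] > 0 and all(d == 0 for j, d in enumerate(idx) if j != i))
    return num / variance

for i in range(2):
    print(f"S_{i + 1} = {first_order_sobol(i):.3f}")
```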


2021, Vol 247, pp. 20005
Author(s):  
Dan G. Cacuci

This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd- and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimate Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective, user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes current data adjustment/assimilation methodologies. Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality that would affect the inversion of Hessian and higher-order matrices in the parameter space.
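To make the sensitivity-to-cumulant step concrete: with 1st-order sensitivities and a parameter covariance matrix, the response variance is a quadratic form, and 2nd-order sensitivities add a correction to the response mean (3rd-order sensitivities then feed the skewness). Below is a minimal sketch of those standard low-order propagation formulas, with illustrative numbers; it is not the 3rd-ASAM itself, which computes the sensitivities adjointly.

```python
# Sketch (not the 3rd-ASAM): propagating Gaussian parameter uncertainty
# through response sensitivities. Values are illustrative.
import numpy as np

s1 = np.array([0.5, -1.2, 0.3])        # 1st-order sensitivities dR/dp_i
H = np.diag([0.1, 0.05, 0.02])         # 2nd-order sensitivities (Hessian), toy values
C = np.diag([0.04, 0.01, 0.09])        # parameter covariance matrix

mean_shift = 0.5 * np.trace(H @ C)     # 2nd-order correction to E[R]
variance = s1 @ C @ s1                 # 1st-order response variance
print(f"E[R] shift = {mean_shift:.4f}, var[R] = {variance:.4f}")
```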


Author(s):  
Liu Xin ◽  
Hu Benxue ◽  
Wang Zhe ◽  
Wang Guodong ◽  
Wang Zhangli ◽  
...  

Compared with conservative evaluation models, the best estimate plus uncertainty (BEPU) method can obtain more realistic results and gain larger license margins with respect to the safety criteria. In view of this, a BEPU method named 4S (SNERDI Statistical Solution for Safety) has been developed according to the basic principles of evaluation model development and assessment in RG 1.203. The characteristics of the 4S method are as follows: the output uncertainty is quantified by random sampling and propagation of the input uncertainties; global sensitivity analysis is used to support PIRT establishment; uncertainties of model parameters are calibrated and validated using separate effects tests, accounting for measurement uncertainties; and the DAKOTA code is used for uncertainty and sensitivity analysis. An automatic BEPU analysis platform has been developed by coupling DAKOTA with different reactor safety analysis codes, and code calculations can be performed in parallel. A BEPU analysis of the mass and energy release and the containment pressure response of CAP1400 under a postulated double-ended cold leg break loss of coolant accident (DECL LOCA) has been carried out by coupling DAKOTA, a mass and energy release analysis code and a containment analysis code. In total, 21 uncertain input parameters are considered. To make the results more stable, the sample size is 124 and the third highest peak pressure is used as the pressure upper bound (with 95%/95% probability/confidence) based on Wilks' formula. The calculated results show that the peak pressure upper bound is clearly lower than that of the conservative method presently used in license applications, with more than 10% analysis margin. The influences of the input parameter uncertainties on the containment peak pressure have been analyzed using the partial rank correlation coefficients calculated by DAKOTA. The results show that the input parameters mainly affecting the peak pressure are the containment condensation heat transfer multiplier, initial containment temperature, break resistance, decay heat, initial containment pressure, Core Makeup Tank (CMT) resistance multiplier and initial containment humidity.
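The 124-sample/third-highest-value choice comes straight from Wilks' nonparametric tolerance-limit formula: using the m-th highest of N samples as a bound covering the 95th percentile with 95% confidence requires N large enough that the binomial tail condition holds. A short sketch of that calculation:

```python
# Sketch: Wilks' formula for one-sided tolerance limits. The m-th
# highest of n random samples bounds the gamma quantile with confidence
#   1 - sum_{k=0}^{m-1} C(n, k) * (1 - gamma)^k * gamma^(n - k).
from math import comb

def wilks_confidence(n, m, gamma=0.95):
    """Confidence that the m-th highest of n samples exceeds the gamma quantile."""
    return 1.0 - sum(comb(n, k) * (1 - gamma) ** k * gamma ** (n - k)
                     for k in range(m))

def min_sample_size(m, gamma=0.95, beta=0.95):
    n = m
    while wilks_confidence(n, m, gamma) < beta:
        n += 1
    return n

print(min_sample_size(1))  # 59  (highest value as the bound)
print(min_sample_size(2))  # 93  (second highest)
print(min_sample_size(3))  # 124 (third highest, as used in the study)
```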


2021, Vol 247, pp. 00002
Author(s):  
Dan G. Cacuci

This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd- and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimate Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective, user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes current data adjustment/assimilation methodologies. Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality that would affect the inversion of Hessian and higher-order matrices in the parameter space.


Agriculture, 2021, Vol 11 (7), pp. 624
Author(s):  
Yan Shan ◽  
Mingbin Huang ◽  
Paul Harris ◽  
Lianhai Wu

A sensitivity analysis is critical for determining the relative importance of model parameters in terms of their influence on the simulated outputs of a process-based model. In this study, a sensitivity analysis of the SPACSYS model, first published in Ecological Modelling (Wu et al., 2007), was conducted with respect to changes in 61 input parameters and their influence on 27 output variables. The analysis was conducted in a ‘one at a time’ manner and objectively assessed through a single statistical diagnostic (normalized root mean square deviation), which ranked the parameters according to their influence on each output variable in turn. A winter wheat field experiment provided the case study data. Two sets of weather elements representing different climatic conditions and four different soil types were specified; the results indicated that these specifications had little influence on the identification of the most sensitive parameters. Soil conditions and management were found to affect the ranking of parameter sensitivities more strongly than weather conditions for the selected outputs. Parameters related to drainage were strongly influential for simulations of soil water dynamics, wheat yield and biomass, runoff, and leaching from soil during individual and consecutive growing years. Wheat yield and biomass simulations were sensitive to the ‘ammonium immobilised fraction’ parameter, which relates to soil mineralization and immobilisation. Simulations of CO2 release from the soil and of soil nutrient pool changes were most sensitive to external nutrient inputs and to the processes of denitrification, mineralization, and decomposition. This study provides important evidence of which SPACSYS parameters require the most care in their specification. Moving forward, this evidence can help direct efficient sampling and lab analyses to increase the accuracy of such parameters. The results provide a useful reference for model users on which parameters are most influential for different simulation goals, which in turn supports better informed decision making for farmers and government policy alike.
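The ranking procedure described above is straightforward to sketch: perturb one parameter at a time, re-run the model, and score each output's deviation from the baseline with a normalized root mean square deviation. The sketch below uses a hypothetical `toy_model` stand-in, not SPACSYS, and an assumed ±10% perturbation size.

```python
# Sketch of the 'one at a time' ranking: perturb each parameter in turn,
# re-run the model, and rank parameters by the normalized root mean
# square deviation (NRMSD) of the output from the baseline run.
import numpy as np

def nrmsd(perturbed, baseline):
    rmsd = np.sqrt(np.mean((perturbed - baseline) ** 2))
    return rmsd / (baseline.max() - baseline.min())   # normalize by baseline range

def rank_parameters(run_model, params, output, delta=0.1):
    baseline = run_model(params)[output]
    scores = {}
    for name in params:
        trial = dict(params)
        trial[name] *= 1.0 + delta                    # one-at-a-time perturbation
        scores[name] = nrmsd(run_model(trial)[output], baseline)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical stand-in model: one output time series, two parameters.
def toy_model(p):
    t = np.linspace(0.0, 1.0, 50)
    return {"yield": p["a"] * t + p["b"] * t ** 2}

print(rank_parameters(toy_model, {"a": 2.0, "b": 0.5}, "yield"))
```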


Author(s):  
Sebastian Brandstaeter ◽  
Sebastian L. Fuchs ◽  
Jonas Biehler ◽  
Roland C. Aydin ◽  
Wolfgang A. Wall ◽  
...  

Abstract. Growth and remodeling in arterial tissue have attracted considerable attention over the last decade. Mathematical models have been proposed, and computational studies with these have helped to understand the role of the different model parameters. It remains, however, poorly understood how much of the model output variability can be attributed to the individual input parameters and their interactions. To clarify this, we propose herein a global sensitivity analysis, based on Sobol indices, for a homogenized constrained mixture model of aortic growth and remodeling. In two representative examples, we found that 54–80% of the long-term output variability resulted from only three model parameters. In our study, the two most influential parameters were the one characterizing the ability of the tissue to increase collagen production under increased stress and the one characterizing the collagen half-life time. The third most influential parameter was the one characterizing the strain-stiffening of collagen under large deformation. Our results suggest that in future computational studies it may, at least in scenarios similar to the ones studied herein, suffice to use population average values for the other parameters. Moreover, our results suggest that developing methods to measure the three most influential parameters may be an important step towards reliable patient-specific predictions of the enlargement of abdominal aortic aneurysms in clinical practice.
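Unlike the PC-based route above, Sobol indices for an expensive simulator are often estimated by Monte Carlo sampling. Below is a generic sketch of that workflow using the SALib package; the three-parameter toy response and the parameter names/bounds are hypothetical stand-ins, not the constrained mixture model from the paper.

```python
# Sketch of variance-based (Sobol) sensitivity analysis via Monte Carlo
# sampling with SALib. The model and bounds below are illustrative only.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["gain_collagen", "half_life", "stiffening"],  # hypothetical names
    "bounds": [[0.5, 2.0], [30.0, 120.0], [1.0, 10.0]],
}

def toy_model(x):
    g, t_half, k = x
    return g * np.exp(-70.0 / t_half) + 0.1 * k   # illustrative scalar response

X = saltelli.sample(problem, 1024)                # Saltelli sampling scheme
Y = np.array([toy_model(x) for x in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices
```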


Energies, 2020, Vol 13 (17), pp. 4290
Author(s):  
Dongmei Zhang ◽  
Yuyang Zhang ◽  
Bohou Jiang ◽  
Xinwei Jiang ◽  
Zhijiang Kang

Reservoir history matching is a well-known inverse problem for production prediction, in which numerous uncertain parameters of a reservoir numerical model are optimized by minimizing the misfit between the simulated and historical production data. Gaussian Processes (GPs) have shown promising performance for assisted history matching, being efficient nonparametric and nonlinear models with few parameters that are tuned automatically. The recently introduced GP-VARS approach combines forward and inverse GP-based proxy models with Variogram Analysis of Response Surfaces (VARS)-based sensitivity analysis to optimize the high-dimensional reservoir parameters. However, the inverse GP solution (GPIS) in GP-VARS is unsatisfactory, especially for numerous reservoir parameters, because the mapping from low-dimensional misfits to high-dimensional uncertain reservoir parameters can be poorly modeled by a GP. To improve the performance of GP-VARS, in this paper we propose Gaussian Processes proxy models with Latent Variable Models and VARS-based sensitivity analysis (GPLVM-VARS), in which a Gaussian Process Latent Variable Model (GPLVM)-based inverse solution (GPLVMIS) replaces the GP-based GPIS, with the inputs and outputs of GPIS reversed. The experimental results demonstrate the effectiveness of the proposed GPLVM-VARS in terms of accuracy and complexity. The source code of the proposed GPLVM-VARS is available at https://github.com/XinweiJiang/GPLVM-VARS.
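To make the forward/inverse proxy idea concrete, here is a minimal sketch using scikit-learn GP regression on synthetic data: a forward GP maps parameters to a misfit, and a naive inverse GP is built by reversing inputs and outputs, which is the GPIS construction the paper improves upon. This is not the GPLVM-VARS code; the parameter dimension, misfit function and data are all assumed for illustration.

```python
# Sketch of the proxy-modeling idea (not GPLVM-VARS itself): forward GP
# (parameters -> misfit) and a naive reversed inverse GP (misfit ->
# parameters), i.e. the GPIS construction. All data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(200, 5))    # uncertain reservoir parameters
misfit = np.sum((params - 0.3) ** 2, axis=1)     # synthetic history-match misfit

forward_gp = GaussianProcessRegressor(kernel=RBF()).fit(params, misfit)
inverse_gp = GaussianProcessRegressor(kernel=RBF()).fit(
    misfit.reshape(-1, 1), params)               # GPIS: inputs and outputs reversed

candidate = inverse_gp.predict(np.array([[0.0]]))  # parameters predicted for zero misfit
print(candidate, forward_gp.predict(candidate))    # check the forward proxy agrees
```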


2021, Vol 21 (1)
Author(s):  
Kiyoaki Sugiura ◽  
Yuki Seo ◽  
Takayuki Takahashi ◽  
Hideyuki Tokura ◽  
Yasuhiro Ito ◽  
...  

Abstract. Background: TAS-102 plus bevacizumab is an anticipated combination regimen for patients with metastatic colorectal cancer, but evidence supporting its use for this indication is limited. We compared the cost-effectiveness of TAS-102 plus bevacizumab combination therapy with that of TAS-102 monotherapy for patients with chemorefractory metastatic colorectal cancer. Methods: Markov decision modeling using treatment costs, disease-free survival, and overall survival was performed to examine the cost-effectiveness of TAS-102 plus bevacizumab combination therapy and TAS-102 monotherapy. The Japanese health care payer's perspective was adopted. The outcomes were modeled on the basis of the published literature. The incremental cost-effectiveness ratio (ICER) between the two treatment regimens was the primary outcome. A sensitivity analysis was performed and the effects of uncertainty in the model parameters were investigated. Results: TAS-102 plus bevacizumab had an ICER of $21,534 per quality-adjusted life-year (QALY) gained compared with TAS-102 monotherapy. The sensitivity analysis demonstrated that TAS-102 monotherapy was more cost-effective than TAS-102 plus bevacizumab combination therapy at a willingness-to-pay under $50,000 per QALY gained. Conclusions: TAS-102 plus bevacizumab combination therapy is a cost-effective option for patients with metastatic colorectal cancer in the Japanese health care system.
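The headline number is defined simply: the ICER is the cost difference divided by the QALY difference between the two regimens. The sketch below shows that calculation with hypothetical cost and QALY inputs chosen only to reproduce the reported $21,534 per QALY; they are not the study's Markov model outputs.

```python
# Sketch: incremental cost-effectiveness ratio (ICER). Inputs are
# hypothetical, chosen to reproduce the reported ICER of ~$21,534/QALY.

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per QALY gained of the new regimen vs. the reference."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

combo = {"cost": 48_767.0, "qaly": 1.40}   # TAS-102 + bevacizumab (hypothetical)
mono = {"cost": 38_000.0, "qaly": 0.90}    # TAS-102 monotherapy (hypothetical)

value = icer(combo["cost"], combo["qaly"], mono["cost"], mono["qaly"])
print(f"ICER = ${value:,.0f} per QALY gained")           # $21,534
print("cost-effective at $50,000 WTP:", value < 50_000)  # willingness-to-pay check
```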

