Optimality-Based Non-Redfield Plankton-Ecosystem Model (OPEMv1.0) in the UVic-ESCM 2.9. Part II: Sensitivity Analysis and Model Calibration


2020 ◽  
Vol 13 (10) ◽  
pp. 4691-4712
Author(s):  
Chia-Te Chien ◽  
Markus Pahlow ◽  
Markus Schartau ◽  
Andreas Oschlies

Abstract. We analyse 400 perturbed-parameter simulations for two configurations of an optimality-based plankton–ecosystem model (OPEM), implemented in the University of Victoria Earth System Climate Model (UVic-ESCM), using a Latin hypercube sampling method for setting up the parameter ensemble. A likelihood-based metric is introduced for model assessment and selection of the model solutions closest to observed distributions of NO3−, PO43−, O2, and surface chlorophyll a concentrations. The simulations closest to the data with respect to our metric exhibit very low rates of global N2 fixation and denitrification, indicating that in order to achieve rates consistent with independent estimates, additional constraints have to be applied in the calibration process. For identifying the reference parameter sets, we therefore also consider the model's ability to represent current estimates of water-column denitrification. We employ our ensemble of model solutions in a sensitivity analysis to gain insights into the importance and role of individual model parameters as well as correlations between various biogeochemical processes and tracers, such as POC export and the NO3− inventory. Global O2 varies by a factor of 2 and NO3− by more than a factor of 6 among all simulations. Remineralisation rate is the most important parameter for O2, which is also affected by the subsistence N quota of ordinary phytoplankton (QN0,phy) and the zooplankton maximum specific ingestion rate. QN0,phy is revealed as a major determinant of the oceanic NO3− pool. This indicates that unravelling the driving forces of variations in phytoplankton physiology and elemental stoichiometry, which are tightly linked via QN0,phy, is a prerequisite for understanding the marine nitrogen inventory.
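The 400-member perturbed-parameter ensemble described above is built with Latin hypercube sampling. A minimal sketch of the stratified sampling step is given below; the parameter names, bounds, and ensemble size are illustrative assumptions, not the OPEM values.

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """Stratified Latin hypercube sample: n points, one per equal-probability
    stratum in each dimension, with the strata paired at random."""
    d = len(bounds)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # point i in stratum i
    for j in range(d):
        rng.shuffle(u[:, j])                              # decouple the dimensions
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(0)
# illustrative bounds for two made-up biogeochemical parameters
bounds = [(0.01, 0.25), (0.1, 2.0)]
ensemble = latin_hypercube(400, bounds, rng)              # 400 parameter vectors
```

Unlike simple random sampling, every marginal stratum is hit exactly once, which is why a few hundred LHS runs can cover a multi-dimensional parameter space reasonably evenly.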


1998 ◽  
Vol 84 (6) ◽  
pp. 2070-2088 ◽  
Author(s):  
Thien D. Bui ◽  
Donald Dabdub ◽  
Steven C. George

The steady-state exchange of inert gases across an in situ canine trachea has recently been shown to be limited equally by diffusion and perfusion over a wide range (0.01–350) of blood solubilities (βblood; ml ⋅ ml−1 ⋅ atm−1). Hence, we hypothesize that the exchange of ethanol (βblood = 1,756 at 37°C) in the airways depends on the blood flow rate from the bronchial circulation. To test this hypothesis, the dynamics of the bronchial circulation were incorporated into an existing model that describes the simultaneous exchange of heat, water, and a soluble gas in the airways. A detailed sensitivity analysis of key model parameters was performed by using the method of Latin hypercube sampling. The model accurately predicted a previously reported experimental exhalation profile of ethanol (R2 = 0.991) as well as the end-exhalation airstream temperature (34.6°C). The model predicts that 27, 29, and 44% of exhaled ethanol in a single exhalation are derived from the tissues of the mucosa and submucosa, the bronchial circulation, and the tissue exterior to the submucosa (which would include the pulmonary circulation), respectively. Although the concentration of ethanol in the bronchial capillary decreased during inspiration, the three key model outputs (end-exhaled ethanol concentration, the slope of phase III, and end-exhaled temperature) were all statistically insensitive (P > 0.05) to the parameters describing the bronchial circulation. In contrast, the model outputs were all sensitive (P < 0.05) to the thickness of tissue separating the core body conditions from the bronchial smooth muscle. We conclude that both the bronchial circulation and the pulmonary circulation impact soluble gas exchange when the entire conducting airway tree is considered.


2020 ◽  
Vol 148 (7) ◽  
pp. 2997-3014
Author(s):  
Caren Marzban ◽  
Robert Tardif ◽  
Scott Sandgathe

Abstract A sensitivity analysis methodology recently developed by the authors is applied to COAMPS and WRF. The method involves varying model parameters according to Latin Hypercube Sampling, and developing multivariate multiple regression models that map the model parameters to forecasts over a spatial domain. The regression coefficients and p values testing whether the coefficients are zero serve as measures of sensitivity of forecasts with respect to model parameters. Nine model parameters are selected from COAMPS and WRF, and their impact is examined on nine forecast quantities (water vapor, convective and gridscale precipitation, and air temperature and wind speed at three altitudes). Although the conclusions depend on the model parameters and specific forecast quantities, it is shown that sensitivity to model parameters is often accompanied by nontrivial spatial structure, which itself depends on the underlying forecast model (i.e., COAMPS vs WRF). One specific difference between these models is in their sensitivity with respect to a parameter that controls temperature increments in the Kain–Fritsch trigger function; whereas this parameter has a distinct spatial structure in COAMPS, that structure is completely absent in WRF. The differences between COAMPS and WRF also extend to the quality of the statistical models used to assess sensitivity; specifically, the differences are largest over the waters off the southeastern coast of the United States. The implication of these findings is twofold: not only is the spatial structure of sensitivities different between COAMPS and WRF, the underlying relationship between the model parameters and the forecasts is also different between the two models.
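The sensitivity measure described here, regression coefficients together with significance tests, can be sketched as an ordinary least-squares fit of one forecast quantity on the sampled parameters. The data below are synthetic stand-ins: the real inputs would be the LHS parameter draws and gridded COAMPS or WRF forecasts, and the p values would come from a t distribution with n − p − 1 degrees of freedom.

```python
import numpy as np

def ols_sensitivity(params, forecast):
    """OLS regression of one forecast quantity on the sampled model parameters;
    the slopes and their t statistics serve as sensitivity measures."""
    n, p = params.shape
    X = np.column_stack([np.ones(n), params])       # intercept + parameters
    beta, *_ = np.linalg.lstsq(X, forecast, rcond=None)
    resid = forecast - X @ beta
    sigma2 = resid @ resid / (n - p - 1)            # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta / np.sqrt(np.diag(cov))
    return beta[1:], t[1:]                          # drop the intercept

rng = np.random.default_rng(42)
params = rng.uniform(0.0, 1.0, size=(200, 3))       # stand-in for the LHS draws
# synthetic "forecast": responds to parameter 0 only
forecast = 5.0 * params[:, 0] + rng.normal(0.0, 0.5, 200)
coef, tstat = ols_sensitivity(params, forecast)
```

Repeating the fit independently at every grid point yields the spatial maps of coefficients and p values that the paper compares between the two models.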


2006 ◽  
Vol 8 (3) ◽  
pp. 223-234 ◽  
Author(s):  
Husam Baalousha

Characterisation of groundwater systems by modelling involves significant uncertainty because of estimation errors in the model parameters and other sources of uncertainty. Deterministic models do not account for these parameter uncertainties and can therefore produce unreliable output. The main alternatives to deterministic models are probabilistic models and sampling-based methods such as Monte Carlo Simulation (MCS). Unfortunately, these methods have drawbacks when applied to risk analysis of groundwater pollution. In this paper, a modified Latin Hypercube Sampling method is presented and used for risk, uncertainty, and sensitivity analysis of groundwater pollution, and its results are compared with those of other sampling methods. The proposed method predicts the groundwater contamination risk across the full range of probabilities better than the other methods while maintaining the accuracy of the mean estimate. The sensitivity analysis reveals that the contaminant concentration is more sensitive to longitudinal dispersivity than to velocity.


Atmosphere ◽  
2018 ◽  
Vol 9 (8) ◽  
pp. 296 ◽  
Author(s):  
Adam Kochanski ◽  
Aimé Fournier ◽  
Jan Mandel

Observational data collected during experiments, such as the planned Fire and Smoke Model Evaluation Experiment (FASMEE), are critical for evaluating coupled fire-atmosphere models like WRF-SFIRE and WRF-SFIRE-CHEM and transitioning them into operational use. Historical meteorological data, representing typical weather conditions for the anticipated burn locations and times, were processed to initialize and run a set of simulations representing the planned experimental burns. Based on an analysis of these numerical simulations, this paper provides recommendations on the experimental setup, such as the size and duration of the burns and optimal sensor placement. New techniques are developed to initialize coupled fire-atmosphere simulations with weather conditions typical of the planned burn locations and times. A sensitivity analysis of the simulated fields to the model parameters, performed by repeated Latin Hypercube Sampling, is used to assess candidate sensor locations: the simulations identify the measurement locations that maximize the expected variation of the sensor outputs as the model parameters vary.
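The placement criterion, picking the locations where the simulated signal varies most across the LHS ensemble, can be sketched as a simple variance ranking. The ensemble below is synthetic; the real input would be the fields produced by the repeated WRF-SFIRE runs.

```python
import numpy as np

def rank_sensor_locations(fields, k):
    """Rank candidate sensor locations by the variance of the simulated signal
    across an ensemble of model runs; fields has shape (n_runs, n_locations)."""
    var = fields.var(axis=0)
    order = np.argsort(var)[::-1]           # most variable locations first
    return order[:k], var[order[:k]]

rng = np.random.default_rng(3)
fields = rng.normal(0.0, 1.0, size=(40, 6))  # 40 LHS runs, 6 candidate sites
fields[:, 3] *= 5.0                          # site 3 reacts most strongly
top, top_var = rank_sensor_locations(fields, 2)
```

A sensor at a high-variance location is, by construction, the one whose reading carries the most information about the uncertain model parameters.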


2013 ◽  
Vol 10 (86) ◽  
pp. 20121018 ◽  
Author(s):  
Jianyong Wu ◽  
Radhika Dhingra ◽  
Manoj Gambhir ◽  
Justin V. Remais

Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques capable of providing considerably more insight than traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC) and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but each offered specific insights. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained with this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period; this is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design.
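Of the methods compared, LHS-PRCC is the most mechanical to implement: rank-transform the sampled parameters and the model output, regress out the other parameters in rank space, and correlate the residuals. A generic sketch on synthetic data (the transmission models themselves are not reproduced here):

```python
import numpy as np

def ranks(x):
    """0-based ranks along the first axis (no ties expected for continuous draws)."""
    return np.argsort(np.argsort(x, axis=0), axis=0).astype(float)

def prcc(params, output):
    """LHS-PRCC: partial rank correlation of each parameter with the output,
    controlling for the remaining parameters via rank-space regression."""
    Rp = ranks(params)
    Ro = ranks(output[:, None])[:, 0]
    n, k = Rp.shape
    out = np.empty(k)
    for j in range(k):
        Z = np.column_stack([np.ones(n), np.delete(Rp, j, axis=1)])
        # residuals after removing the influence of the other parameters
        rj = Rp[:, j] - Z @ np.linalg.lstsq(Z, Rp[:, j], rcond=None)[0]
        ro = Ro - Z @ np.linalg.lstsq(Z, Ro, rcond=None)[0]
        out[j] = rj @ ro / np.sqrt((rj @ rj) * (ro @ ro))
    return out

rng = np.random.default_rng(11)
params = rng.uniform(size=(300, 3))
# monotone, strongly nonlinear response to parameter 0 only
output = np.exp(3.0 * params[:, 0]) + 0.1 * rng.normal(size=300)
coeffs = prcc(params, output)
```

Because it works on ranks, PRCC captures the nonlinear but monotone dependence on parameter 0 that an ordinary linear correlation would understate.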


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
J. B. H. Njagarah ◽  
F. Nyabadza

A mathematical model for the dynamics of cholera transmission with permissible controls between two connected communities is developed and analysed. The dynamics of the disease in the adjacent communities are assumed to be similar, with the main differences only reflected in the transmission and disease related parameters. This assumption is based on the fact that adjacent communities often have different living conditions and movement is inclined toward the community with better living conditions. Community specific reproduction numbers are given assuming movement of those susceptible, infected, and recovered, between communities. We carry out sensitivity analysis of the model parameters using the Latin Hypercube Sampling scheme to ascertain the degree of effect the parameters and controls have on progression of the infection. Using principles from optimal control theory, a temporal relationship between the distribution of controls and severity of the infection is ascertained. Our results indicate that implementation of controls such as proper hygiene, sanitation, and vaccination across both affected communities is likely to annihilate the infection within half the time it would take through self-limitation. In addition, although an infection may still break out in the presence of controls, it may be up to 8 times less devastating when compared with the case when no controls are in place.


2018 ◽  
Vol 16 (1) ◽  
pp. 125-134
Author(s):  
Nikola Velimirovic ◽  
Dragoslav Stojic

Sensitivity analysis can be defined as the study of how the variability of a model's output can be apportioned to its sources, that is, to the variability of the various input parameters. It helps to identify the most important design parameters of a particular structure and to focus on them during the design and optimization process. This paper focuses on the application of stochastic sensitivity analysis to the maximum equivalent stress and the maximum mid-span deflection of a timber-concrete composite beam. All input parameters were treated as random variables, and the Latin Hypercube Sampling numerical simulation method was employed. The sensitivity estimates were derived from the Spearman rank-order correlation coefficient.
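The paper's beam model and input distributions are not reproduced in this abstract; as an illustration, the same Spearman-based procedure applied to the textbook mid-span deflection of a simply supported beam under uniform load, w = 5qL⁴/(384EI), with assumed input distributions:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))

rng = np.random.default_rng(7)
n = 500
q = rng.normal(10e3, 1e3, n)      # line load [N/m]      (assumed distribution)
E = rng.normal(12e9, 1e9, n)      # elastic modulus [Pa] (assumed distribution)
I = rng.normal(2e-4, 1e-5, n)     # second moment [m^4]  (assumed distribution)
L = 5.0                           # span [m], held fixed
w = 5 * q * L**4 / (384 * E * I)  # mid-span deflection per sampled realisation
sens = {name: spearman(x, w) for name, x in (("q", q), ("E", E), ("I", I))}
```

The signs come out as expected: the deflection grows with the load and shrinks with stiffness, and the coefficient magnitudes reflect each input's relative scatter.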


Sensitivity analysis is a widely applied tool used to investigate the predictions of numerical models of environmental processes. This paper illustrates the importance of undertaking a sensitivity analysis that considers the spatially distributed nature of model predictions, rather than simply assessing the sensitivity of one or more model parameters that are assumed to represent the distributed behaviour of the system. A spatially distributed sensitivity analysis is applied to the output from a distributed model of turbulent river flow, used to simulate the flow processes in a natural river channel bifurcation. Examples are provided for an input parameter which illustrates the importance of sensitivity analysis with respect to model assessment and error analysis. The distributed nature of the analysis suggests the importance of spatial feedback in environmental systems that more traditional approaches to sensitivity analysis cannot reveal.
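The distinction drawn here, a sensitivity map rather than a single lumped number, can be sketched with a finite-difference perturbation evaluated cell by cell. The "flow model" below is a toy stand-in, not the turbulent river-flow model of the paper.

```python
import numpy as np

def spatial_sensitivity(model, p, dp):
    """Finite-difference sensitivity of a spatially distributed model output
    to a scalar parameter, evaluated cell by cell over the whole grid."""
    return (model(p + dp) - model(p)) / dp   # one value per grid cell

# toy stand-in for a distributed model: a smooth 2-D field scaled by p
grid = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(grid, grid)
def model(p):
    return p * np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.1)

S = spatial_sensitivity(model, 1.0, 1e-6)    # a sensitivity map, not a scalar
```

Inspecting where the map S is large, rather than averaging it away, is what reveals the spatial feedbacks that a lumped sensitivity number cannot.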


2021 ◽  
Author(s):  
Petr Pecha ◽  
Miroslav Kárný

Abstract During several hours of calm meteorological conditions, a significant amount of radioactivity can accumulate around the source. In the second stage, the calm situation is assumed to end, and wind-driven convective movement of the air begins immediately. Random realisations of the input parameters of the atmospheric dispersion model for the CALM scenario are generated with a Latin Hypercube Sampling (LHS) scheme. The resulting complex random radiological trajectories, passing through both the calm and the convective stages of the release scenario, are a prerequisite for prospective uncertainty analysis (UA) and sensitivity analysis (SA). A novel Approximate-Based (AB) solution approximates the non-Gaussian sum of the individual puffs at the end of the calm period by a single Gaussian “superpuff” distribution. The resulting substantial acceleration in generating a sufficiently large number of random realisations makes further UA and SA feasible. Both procedures build on a common mapping between the matrix of random dependent output fields and the vector of random input-parameter realisations. Examples of 2-D random trajectories of deposited 137Cs are presented, and a global sensitivity analysis utilising random sampling methods is outlined.
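The AB "superpuff" construction is not specified in detail in this abstract; a generic moment-matching sketch in 1-D, which replaces a sum of Gaussian puffs by a single Gaussian with the same total mass, mean, and variance, would look as follows (all numbers made up):

```python
import numpy as np

def superpuff(masses, centres, sigmas):
    """Collapse a sum of 1-D Gaussian puffs into a single Gaussian that
    matches the mixture's total mass, mean, and variance."""
    m = np.asarray(masses, float)
    mu = np.asarray(centres, float)
    s = np.asarray(sigmas, float)
    M = m.sum()
    w = m / M                                  # mass fractions
    mean = w @ mu
    var = w @ (s**2 + (mu - mean)**2)          # within- plus between-puff spread
    return M, mean, np.sqrt(var)

# three puffs accumulated during the calm phase (illustrative values)
M, mean, sigma = superpuff([1.0, 2.0, 1.0], [0.0, 50.0, 120.0], [30.0, 40.0, 35.0])
```

Replacing many puffs by one Gaussian in this way is what makes generating thousands of random scenario realisations cheap enough for the subsequent UA and SA.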

