Modeling bronchial circulation with application to soluble gas exchange: description and sensitivity analysis

1998 ◽  
Vol 84 (6) ◽  
pp. 2070-2088 ◽  
Author(s):  
Thien D. Bui ◽  
Donald Dabdub ◽  
Steven C. George

The steady-state exchange of inert gases across an in situ canine trachea has recently been shown to be limited equally by diffusion and perfusion over a wide range (0.01–350) of blood solubilities (βblood; ml ⋅ ml−1 ⋅ atm−1). Hence, we hypothesize that the exchange of ethanol (βblood = 1,756 at 37°C) in the airways depends on the blood flow rate from the bronchial circulation. To test this hypothesis, the dynamics of the bronchial circulation were incorporated into an existing model that describes the simultaneous exchange of heat, water, and a soluble gas in the airways. A detailed sensitivity analysis of key model parameters was performed by using the method of Latin hypercube sampling. The model accurately predicted a previously reported experimental exhalation profile of ethanol (R² = 0.991) as well as the end-exhalation airstream temperature (34.6°C). The model predicts that 27, 29, and 44% of exhaled ethanol in a single exhalation are derived from the tissues of the mucosa and submucosa, the bronchial circulation, and the tissue exterior to the submucosa (which would include the pulmonary circulation), respectively. Although the concentration of ethanol in the bronchial capillary decreased during inspiration, the three key model outputs (end-exhaled ethanol concentration, the slope of phase III, and end-exhaled temperature) were all statistically insensitive (P > 0.05) to the parameters describing the bronchial circulation. In contrast, the model outputs were all sensitive (P < 0.05) to the thickness of tissue separating the core body conditions from the bronchial smooth muscle. We conclude that both the bronchial circulation and the pulmonary circulation impact soluble gas exchange when the entire conducting airway tree is considered.

2020 ◽  
Vol 148 (7) ◽  
pp. 2997-3014
Author(s):  
Caren Marzban ◽  
Robert Tardif ◽  
Scott Sandgathe

Abstract A sensitivity analysis methodology recently developed by the authors is applied to COAMPS and WRF. The method involves varying model parameters according to Latin Hypercube Sampling, and developing multivariate multiple regression models that map the model parameters to forecasts over a spatial domain. The regression coefficients and p values testing whether the coefficients are zero serve as measures of sensitivity of forecasts with respect to model parameters. Nine model parameters are selected from COAMPS and WRF, and their impact is examined on nine forecast quantities (water vapor, convective and gridscale precipitation, and air temperature and wind speed at three altitudes). Although the conclusions depend on the model parameters and specific forecast quantities, it is shown that sensitivity to model parameters is often accompanied by nontrivial spatial structure, which itself depends on the underlying forecast model (i.e., COAMPS vs WRF). One specific difference between these models is in their sensitivity with respect to a parameter that controls temperature increments in the Kain–Fritsch trigger function; whereas this parameter has a distinct spatial structure in COAMPS, that structure is completely absent in WRF. The differences between COAMPS and WRF also extend to the quality of the statistical models used to assess sensitivity; specifically, the differences are largest over the waters off the southeastern coast of the United States. The implication of these findings is twofold: not only is the spatial structure of sensitivities different between COAMPS and WRF, the underlying relationship between the model parameters and the forecasts is also different between the two models.
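The LHS-plus-regression recipe described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the forecast model is replaced by a toy linear response, and all variable names and parameter ranges are invented.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_dims, rng):
    """One LHS design on [0, 1): each dimension is split into n_samples
    equal strata and every stratum is sampled exactly once."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])  # decorrelate the columns
    return u

rng = np.random.default_rng(0)
n = 200
X = latin_hypercube(n, 3, rng)

# Toy stand-in for a forecast quantity: it depends on the first two
# "model parameters" and not at all on the third.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

# OLS fit; the coefficients and their p-values serve as the
# sensitivity measures described in the abstract.
A = np.column_stack([np.ones(n), X])
beta, res, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = n - A.shape[1]
sigma2 = res[0] / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
pvals = 2 * stats.t.sf(np.abs(beta / se), dof)
```

In the paper's setting the regression is multivariate (one such fit per grid point), which is what produces the spatial maps of sensitivity.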


2007 ◽  
Vol 7 (1) ◽  
pp. 15-30 ◽  
Author(s):  
D. Helmig ◽  
L. Ganzeveld ◽  
T. Butler ◽  
S. J. Oltmans

Abstract. Recent research on snowpack processes and atmosphere-snow gas exchange has demonstrated that chemical and physical interactions between the snowpack and the overlaying atmosphere have a substantial impact on the composition of the lower troposphere. These observations also imply that ozone deposition to the snowpack possibly depends on parameters including the quantity and composition of deposited trace gases, solar irradiance, snow temperature and the substrate below the snowpack. Current literature spans a remarkably wide range of ozone deposition velocities (vdO3); several studies even reported positive ozone fluxes out of the snow. Overall, published values range from ~–3<vdO3<2 cm s−1, although most data are within 0<vdO3<0.2 cm s−1. This literature reveals a high uncertainty in the parameterization and the magnitude of ozone fluxes into (and possibly out of) snow-covered landscapes. In this study a chemistry and tracer transport model was applied to evaluate the applicability of the published vdO3 and to investigate the sensitivity of tropospheric ozone towards ozone deposition over Northern Hemisphere snow-covered land and sea-ice. Model calculations using increasing vdO3 of 0.0, 0.01, 0.05 and 0.10 cm s−1 resulted in general ozone sensitivities up to 20–30% in the Arctic surface layer, and of up to 130% local increases in selected Northern Latitude regions. The simulated ozone concentrations were compared with mean January ozone observations from 18 Arctic stations. Best agreement between the model and observations, not only in terms of absolute concentrations but also in the hourly ozone variability, was found by applying an ozone deposition velocity in the range of 0.00–0.01 cm s−1, which is smaller than most literature data and also significantly lower compared to the value of 0.05 cm s−1 that is commonly applied in large-scale atmospheric chemistry models. 
This sensitivity analysis demonstrates that large errors in the description of the wintertime tropospheric ozone budget stem from the uncertain magnitude of ozone deposition rates and the inability to properly parameterize ozone fluxes to snow-covered landscapes.


2020 ◽  
Vol 13 (10) ◽  
pp. 4691-4712
Author(s):  
Chia-Te Chien ◽  
Markus Pahlow ◽  
Markus Schartau ◽  
Andreas Oschlies

Abstract. We analyse 400 perturbed-parameter simulations for two configurations of an optimality-based plankton–ecosystem model (OPEM), implemented in the University of Victoria Earth System Climate Model (UVic-ESCM), using a Latin hypercube sampling method for setting up the parameter ensemble. A likelihood-based metric is introduced for model assessment and selection of the model solutions closest to observed distributions of NO3-, PO43-, O2, and surface chlorophyll a concentrations. The simulations closest to the data with respect to our metric exhibit very low rates of global N2 fixation and denitrification, indicating that in order to achieve rates consistent with independent estimates, additional constraints have to be applied in the calibration process. For identifying the reference parameter sets, we therefore also consider the model's ability to represent current estimates of water-column denitrification. We employ our ensemble of model solutions in a sensitivity analysis to gain insights into the importance and role of individual model parameters as well as correlations between various biogeochemical processes and tracers, such as POC export and the NO3- inventory. Global O2 varies by a factor of 2 and NO3- by more than a factor of 6 among all simulations. Remineralisation rate is the most important parameter for O2, which is also affected by the subsistence N quota of ordinary phytoplankton (Q0,phyN) and zooplankton maximum specific ingestion rate. Q0,phyN is revealed as a major determinant of the oceanic NO3- pool. This indicates that unravelling the driving forces of variations in phytoplankton physiology and elemental stoichiometry, which are tightly linked via Q0,phyN, is a prerequisite for understanding the marine nitrogen inventory.
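The likelihood-based ranking of ensemble members can be illustrated with a minimal Gaussian misfit. The fields, error scale, and ensemble below are invented stand-ins, not the OPEM metric itself:

```python
import numpy as np

def log_likelihood(model, obs, sigma):
    """Gaussian log-likelihood of the observations given one model
    field, up to an additive constant; higher means a closer match."""
    return -0.5 * np.sum(((model - obs) / sigma) ** 2)

rng = np.random.default_rng(1)
obs = rng.random(50)  # stand-in observed tracer field (e.g. NO3-)

# Perturbed-parameter "ensemble": members drift further from the data.
ensemble = [obs + 0.1 * k * rng.standard_normal(50) for k in range(1, 5)]

scores = [log_likelihood(m, obs, sigma=0.2) for m in ensemble]
best = int(np.argmax(scores))  # index of the member closest to the data
```

A metric like this can rank hundreds of perturbed-parameter runs against several tracers at once, but as the abstract notes, a good fit to the tracer fields alone may still leave process rates (here, N2 fixation and denitrification) poorly constrained.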


2006 ◽  
Vol 8 (3) ◽  
pp. 223-234 ◽  
Author(s):  
Husam Baalousha

Characterisation of groundwater models involves significant uncertainty because of model estimation errors and other sources of uncertainty. Deterministic models do not account for uncertainties in model parameters and thus lead to doubtful output. The main alternatives to deterministic models are probabilistic models and perturbation methods such as Monte Carlo Simulation (MCS). Unfortunately, these methods have many drawbacks when applied to risk analysis of groundwater pollution. In this paper, a modified Latin Hypercube Sampling method is presented and used for risk, uncertainty, and sensitivity analysis of groundwater pollution. The results were compared with those of other sampling methods. The proposed method predicts the groundwater contamination risk better than other methods for all values of probability while maintaining the accuracy of the mean estimate. Sensitivity analysis results reveal that the contaminant concentration is more sensitive to longitudinal dispersivity than to velocity.
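The variance-reduction property that motivates hypercube sampling over plain Monte Carlo can be sketched with a one-dimensional toy response; the response function and sample sizes are illustrative, not the paper's groundwater model:

```python
import numpy as np

def lhs_1d(n, rng):
    """Latin hypercube sample of n points on [0, 1): one point per
    equal-width stratum, in random order."""
    return (rng.random(n) + rng.permutation(n)) / n

rng = np.random.default_rng(2)
g = lambda x: np.exp(x)  # stand-in model response; true mean is e - 1

n, reps = 50, 500
mc = [g(rng.random(n)).mean() for _ in range(reps)]   # simple random sampling
lhs = [g(lhs_1d(n, rng)).mean() for _ in range(reps)]  # stratified (LHS)

# Stratification preserves the mean but shrinks the estimator variance,
# which is what makes LHS attractive for risk estimation at fixed cost.
print(np.std(mc), np.std(lhs))
```

The paper's "modified" LHS goes further than this plain version, but the comparison above shows the baseline advantage both build on.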


2020 ◽  
Author(s):  
Chia-Te Chien ◽  
Markus Pahlow ◽  
Markus Schartau ◽  
Andreas Oschlies

Abstract. We analyse 400 perturbed-parameter simulations for two configurations of an optimality-based plankton-ecosystem model (OPEM), implemented in the University of Victoria Earth-System Climate Model (UVic-ESCM), using a Latin-Hypercube sampling method for setting up the parameter ensemble. A likelihood-based metric is introduced for model assessment and selection of the model solutions closest to observed distributions of NO3−, PO43−, O2, and surface chlorophyll a concentrations. According to our metric the optimal model solutions comprise low rates of global N2 fixation and denitrification. These two rate estimates turned out to be poorly constrained by the data. For identifying the “best” model solutions we therefore also consider the model’s ability to represent current estimates of water-column denitrification. We employ our ensemble of model solutions in a sensitivity analysis to gain insights into the importance and role of individual model parameters as well as correlations between various biogeochemical processes and tracers, such as POC export and the NO3− inventory. Global O2 varies by a factor of two and NO3− by more than a factor of six among all simulations. Remineralisation rate is the most important parameter for O2, which is also affected by the subsistence N quota of ordinary phytoplankton (QN0,phy) and zooplankton maximum specific ingestion rate. QN0,phy is revealed as a major determinant of the oceanic NO3− pool. This indicates that unraveling the driving forces of variations in phytoplankton physiology and elemental stoichiometry, which are tightly linked via QN0,phy, is a prerequisite for understanding the marine nitrogen inventory.


2004 ◽  
Vol 97 (5) ◽  
pp. 1702-1708 ◽  
Author(s):  
Carmel Schimmel ◽  
Susan L. Bernard ◽  
Joseph C. Anderson ◽  
Nayak L. Polissar ◽  
S. Lakshminarayan ◽  
...  

We studied the airway gas exchange properties of five inert gases with different blood solubilities in the lungs of anesthetized sheep. Animals were ventilated through a bifurcated endobronchial tube to allow independent ventilation and collection of exhaled gases from each lung. An aortic pouch at the origin of the bronchial artery was created to control perfusion and enable infusion of a solution of inert gases into the bronchial circulation. Occlusion of the left pulmonary artery prevented pulmonary perfusion of that lung so that gas exchange occurred predominantly via the bronchial circulation. Excretion from the bronchial circulation (defined as the partial pressure of gas in exhaled gas divided by the partial pressure of gas in bronchial arterial blood) increased with increasing gas solubility (ranging from a mean of 4.2 × 10−5 for SF6 to 4.8 × 10−2 for ether) and increasing bronchial blood flow. Excretion was inversely affected by molecular weight (MW), demonstrating a dependence on diffusion. Excretions of the higher MW gases, halothane (MW = 194) and SF6 (MW = 146), were depressed relative to excretion of the lower MW gases ethane, cyclopropane, and ether (MW = 30, 42, 74, respectively). All results were consistent with previous studies of gas exchange in the isolated in situ trachea.


2019 ◽  
Vol 5 (12) ◽  
pp. 2738-2746
Author(s):  
Abdul Ghani Soomro ◽  
Muhammad Munir Babar ◽  
Anila Hameem Memon ◽  
Arjumand Zehra Zaidi ◽  
Arshad Ashraf ◽  
...  

This study explores the impact of the runoff curve number (CN) on hydrological model outputs for the Morai watershed, Sindh, Pakistan, using the Soil Conservation Service Curve Number (SCS-CN) method. The SCS-CN method is an empirical technique used to estimate rainfall-runoff volume from precipitation in small watersheds, and CN is an empirically derived parameter used to calculate direct runoff from a rainfall event. CN depends on soil type, soil condition, and the land use and land cover (LULC) of an area. Precise knowledge of these factors was not available for the study area; therefore, a range of values was selected to analyze the sensitivity of the model to changing CN values. Sensitivity analysis involves a methodical manipulation of model parameters to understand their impacts on model outputs. A range of CN values from 40 to 90 was selected to determine their effects on model results at the sub-catchment level during the historic flood year of 2010. The model simulated 362 cumecs of peak discharge for CN = 90; for CN = 40, however, the discharge fell substantially to 78 cumecs (a 78.46% reduction). Event-based comparison of water volumes for different pairs of CN values (90-75, 80-75, 75-70, and 90-40) showed reductions in water availability of 8.88%, 3.39%, 3.82%, and 41.81%, respectively. Although it is known that the higher the CN, the greater the direct-runoff discharge and the lower the initial losses, the sensitivity analysis quantifies that impact and determines the discharges associated with changing CN values. The results of the case study suggest that CN is one of the most influential parameters in the simulation of direct runoff. Knowledge of accurate runoff is important in both wet periods (flood management) and dry periods (water availability). The wide range in the resulting water discharges highlights the importance of precise CN selection.
Sensitivity analysis is an essential facet of establishing hydrological models in data-limited watersheds. The range of CN values has an enormous quantitative effect on direct runoff, and its accurate estimation is necessary for effective water resource planning and management. The method itself is not novel, but as applied here it can justify investments in determining an accurate CN before initiating major projects involving rainfall-runoff simulations; even a small error in the CN value may lead to serious consequences. In the current study, the sensitivity analysis tests the robustness of the model results in the presence of ambiguity in the CN value.
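The CN sensitivity the study quantifies follows directly from the standard SCS-CN relation, shown below in its metric form (the storm depth and CN values are illustrative; the study's full watershed model involves more than this single equation):

```python
def scs_runoff(p_mm, cn, lambda_ia=0.2):
    """Direct runoff depth (mm) from the SCS-CN method.
    S is the potential maximum retention and Ia = lambda * S is the
    initial abstraction (0.2 * S in the standard formulation)."""
    s = 25400.0 / cn - 254.0  # retention, mm (metric form of 1000/CN - 10)
    ia = lambda_ia * s
    if p_mm <= ia:
        return 0.0  # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Sensitivity of a single 100 mm storm to the curve number:
for cn in (40, 60, 75, 90):
    print(cn, round(scs_runoff(100.0, cn), 1))
```

Because CN enters through the retention term S, runoff depth grows sharply and nonlinearly with CN, which is why the simulated discharge spans such a wide range over CN = 40 to 90.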


Atmosphere ◽  
2018 ◽  
Vol 9 (8) ◽  
pp. 296 ◽  
Author(s):  
Adam Kochanski ◽  
Aimé Fournier ◽  
Jan Mandel

Observational data collected during experiments, such as the planned Fire and Smoke Model Evaluation Experiment (FASMEE), are critical for evaluating and transitioning coupled fire-atmosphere models like WRF-SFIRE and WRF-SFIRE-CHEM into operational use. Historical meteorological data, representing typical weather conditions for the anticipated burn locations and times, have been processed to initialize and run a set of simulations representing the planned experimental burns. Based on an analysis of these numerical simulations, this paper provides recommendations on the experimental setup, such as the size and duration of the burns and optimal sensor placement. New techniques are developed to initialize coupled fire-atmosphere simulations with weather conditions typical of the planned burn locations and times. A sensitivity analysis of the simulation design with respect to model parameters, performed by repeated Latin hypercube sampling, is used to assess sensor locations. The simulations identify measurement locations that maximize the expected variation of the sensor outputs as the model parameters vary.


2013 ◽  
Vol 141 (11) ◽  
pp. 4069-4079 ◽  
Author(s):  
Caren Marzban

Abstract Sensitivity analysis (SA) generally refers to an assessment of the sensitivity of the output(s) of some complex model with respect to changes in the input(s). Examples of inputs or outputs include initial state variables, parameters of a numerical model, or state variables at some future time. Sensitivity analysis is useful for data assimilation, model tuning, calibration, and dimensionality reduction; and there exists a wide range of SA techniques for each. This paper discusses one special class of SA techniques, referred to as variance based. As a first step in demonstrating the utility of the method in understanding the relationship between forecasts and parameters of complex numerical models, here the method is applied to the Lorenz'63 model, and the results are compared with an adjoint-based approach to SA. The method has three major components: 1) analysis of variance, 2) emulation of computer data, and 3) experimental–sampling design. The role of these three topics in variance-based SA is addressed in generality. More specifically, the application to the Lorenz'63 model suggests that the Z state variable is most sensitive to the b and r parameters, and is mostly unaffected by the s parameter. There is also evidence for an interaction between the r and b parameters. It is shown that these conclusions are true for both simple random sampling and Latin hypercube sampling, although the latter leads to slightly more precise estimates for some of the sensitivity measures.
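A bare-bones version of variance-based SA applied to Lorenz'63 can be sketched as follows. This is a toy demonstration of the binned first-order estimator under invented settings (short integration from a fixed initial state, illustrative parameter ranges); it does not reproduce the paper's emulation or its adjoint comparison.

```python
import numpy as np

def lorenz_z(s, r, b, t_end=1.0, dt=0.001):
    """Integrate Lorenz'63 from a fixed initial state with forward
    Euler and return the final Z state variable."""
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(int(t_end / dt)):
        dx = s * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return z

def first_order_index(param, output, n_bins=10):
    """Crude variance-based sensitivity Var(E[output|param]) / Var(output),
    estimated by binning the parameter into equal-count strata."""
    bins = np.quantile(param, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(param, bins) - 1, 0, n_bins - 1)
    cond_means = np.array([output[idx == k].mean() for k in range(n_bins)])
    counts = np.array([(idx == k).sum() for k in range(n_bins)])
    between = np.average((cond_means - output.mean()) ** 2, weights=counts)
    return between / output.var()

rng = np.random.default_rng(3)
n = 400
s = rng.uniform(9, 11, n)      # ranges around the classic (10, 28, 8/3)
r = rng.uniform(26, 30, n)
b = rng.uniform(2.2, 3.2, n)
z_out = np.array([lorenz_z(*p) for p in zip(s, r, b)])

for name, p in (("s", s), ("r", r), ("b", b)):
    print(name, round(first_order_index(p, z_out), 3))
```

The paper replaces this crude binning with a proper analysis-of-variance decomposition and an emulator of the computer output, which also exposes the r-b interaction term that a purely first-order estimate like this one cannot see.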


2006 ◽  
Vol 6 (1) ◽  
pp. 755-794
Author(s):  
D. Helmig ◽  
L. Ganzeveld ◽  
T. Butler ◽  
S. J. Oltmans

Abstract. Recent research on snowpack processes and atmosphere-snow gas exchange has demonstrated that chemical and physical interactions between the snowpack and the overlaying atmosphere have a substantial impact on the composition of the lower troposphere. These observations also imply that ozone deposition to the snowpack possibly depends on parameters including the quantity and composition of deposited trace gases, solar irradiance, snow temperature and the substrate below the snowpack. Current literature spans a remarkably wide range of ozone deposition velocities (vdO3); several studies even reported positive ozone fluxes out of the snow. Overall, published values range from ~−3 < vdO3 < 2 cm s−1, though most data are within ~0 < vdO3 < 0.2 cm s−1. This literature reveals a high uncertainty in the parameterization and the magnitude of ozone fluxes into (and possibly out of) snow-covered landscapes. In this study a chemistry and tracer transport model was applied to investigate the sensitivity of tropospheric ozone towards ozone deposition over Northern Hemisphere snow-covered land and sea-ice. Model calculations using increasing vdO3 of 0.0, 0.01, 0.05 and 0.10 cm s−1 resulted in general ozone sensitivities up to 20–30% in the Arctic surface layer, and of up to 130% local increases in selected Northern Latitude regions. The simulated ozone concentrations were compared with mean January ozone observations from 18 Arctic stations. Best agreement between the model and observations, not only in terms of absolute concentrations but also in the hourly ozone variability, was found by applying an ozone deposition velocity in the range of 0.00–0.01 cm s−1, which is smaller than most literature data and also significantly lower compared to the value of 0.05 cm s−1 that is commonly applied in large-scale atmospheric chemistry models. 
This sensitivity analysis demonstrates that large errors in the description of the wintertime tropospheric ozone budget stem from the uncertain magnitude of ozone deposition rates and the inability to properly parameterize ozone fluxes to snow-covered landscapes.

