Conducting Sensitivity Analyses to Identify and Buffer Power Vulnerabilities in Studies Examining Substance Use over Time

2018 ◽  
Author(s):  
Sean Patrick Lane ◽  
Erin Hennes

Introduction: A priori power analysis is increasingly being recognized as a useful tool for designing efficient research studies that improve the probability of robust and publishable results. However, power analyses for many empirical designs in the addiction sciences require consideration of numerous parameters. Identifying appropriate parameter estimates is challenging due to multiple sources of uncertainty, which can limit power analyses’ utility. Method: We demonstrate a sensitivity analysis approach for systematically investigating the impact of various model parameters on power. We illustrate this approach using three design aspects of importance for substance use researchers conducting longitudinal studies – base rates, individual differences (i.e., random slopes), and correlated predictors (e.g., co-use) – and examine how sensitivity analyses can illuminate strategies for controlling power vulnerabilities in such parameters. Results: Even large numbers of participants and/or repeated assessments can be insufficient to observe associations when substance use base rates are too low or too high. Large individual differences can adversely affect power, even with increased assessments. Collinear predictors are rarely detrimental unless the correlation is high. Conclusions: Increasing participants is usually more effective at buffering power than increasing assessments. Research designs can often enhance power by assessing participants twice as frequently as substance use occurs. Heterogeneity should be carefully estimated or empirically controlled, whereas collinearity infrequently impacts power significantly. Sensitivity analyses can identify regions of model parameter spaces that are vulnerable to bad guesses or sampling variability. These insights can be used to design robust studies that make optimal use of limited resources.
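To make the sensitivity-analysis idea concrete, the sketch below estimates power by simulation across a grid of base rates and sample sizes for a single binary use outcome. It is a minimal stand-in for the authors' longitudinal designs: the effect size, the grid values, and the point-biserial test used in place of a full logistic mixed model are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_power(n_participants, base_rate, effect=0.5, n_sims=500, alpha=0.05):
    """Estimate power to detect `effect` on a binary substance-use outcome."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n_participants)                 # predictor (e.g., craving)
        logit = np.log(base_rate / (1 - base_rate)) + effect * x
        y = rng.random(n_participants) < 1 / (1 + np.exp(-logit))
        if y.all() or not y.any():                          # degenerate sample: no events or non-events
            continue                                        # counts as a miss
        _, p = stats.pointbiserialr(y.astype(int), x)       # cheap proxy for a logistic fit
        hits += p < alpha
    return hits / n_sims

for base_rate in (0.05, 0.25, 0.50, 0.75, 0.95):
    for n in (50, 100, 200):
        print(f"base rate {base_rate:.2f}, n={n}: power ~ {simulate_power(n, base_rate):.2f}")
```

Running the grid shows the U-shaped vulnerability the abstract describes: power collapses at very low and very high base rates even as n grows.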

2019 ◽  
Author(s):  
David F. Little ◽  
Joel S. Snyder ◽  
Mounya Elhilali

Abstract. Perceptual bistability—the spontaneous fluctuation of perception between two interpretations of a stimulus—occurs when observing a large variety of ambiguous stimulus configurations. This phenomenon has the potential to serve as a tool for, among other things, understanding how function varies across individuals due to the large individual differences that manifest during perceptual bistability. Yet it remains difficult to interpret the functional processes at work without knowing where bistability arises during perception. In this study we explore the hypothesis that bistability originates from multiple sources distributed across the perceptual hierarchy. We develop a hierarchical model of auditory processing composed of three distinct levels: a Peripheral, tonotopic analysis; a Central analysis computing features found more centrally in the auditory system; and an Object analysis, where sounds are segmented into different streams. We model bistable perception within this system by injecting adaptation, inhibition and noise into one or all of the three levels of the hierarchy. We evaluate a large ensemble of variations of this hierarchical model, where each model has a different configuration of adaptation, inhibition and noise. This approach avoids the assumption that a single configuration must be invoked to explain the data. Each model is evaluated based on its ability to replicate two hallmarks of bistability during auditory streaming: the selectivity of bistability to specific stimulus configurations, and the characteristic log-normal pattern of perceptual switches. Consistent with a distributed origin, a broad range of model parameters across this hierarchy lead to a plausible form of perceptual bistability. The ensemble also appears to predict that greater individual variation in adaptation and inhibition occurs in later stages of perceptual processing.

Author summary: Our ability to experience the everyday world through our senses requires that we resolve numerous ambiguities present in the physical evidence available. This is accomplished, in part, through a series of hierarchical computations, in which stimulus interpretations grow increasingly abstract. Our ability to resolve ambiguity does not always succeed, such as during optical illusions. In this study, we examine a form of perceptual ambiguity called bistability—cases in which a single individual's perception spontaneously switches back and forth between two interpretations of a single stimulus. A challenge in understanding bistability is that we don't know where along the perceptual hierarchy it is generated. Here we test the idea that there are multiple origins by building a simulation of the auditory system. Consistent with a multi-source account of bistability, this simulation accurately predicts perception of a simple auditory stimulus when bistability originates from a number of different sources within the model. The data also indicate that individual differences during ambiguous perception may primarily originate from higher levels of the perceptual hierarchy. This result provides a clue for future work aiming to determine how auditory function differs across individual brains.
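As a rough illustration of the modeling ingredients named above, the following sketch simulates two mutually inhibiting units with slow adaptation and noise, the standard components the ensemble varies at each level of the hierarchy. All parameter values are invented for illustration and do not correspond to any fitted configuration in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 60.0                                  # step (s), total time (s)
beta, g_a, tau_a, sigma = 3.0, 2.0, 2.0, 0.15       # inhibition, adaptation gain/time, noise

def f(x):                                           # firing-rate nonlinearity
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

r = np.array([0.5, 0.4])                            # activity of the two percepts
a = np.zeros(2)                                     # adaptation variables
dominant = []
for _ in range(int(T / dt)):
    drive = 1.0 - beta * r[::-1] - g_a * a          # input minus cross-inhibition and adaptation
    r += dt * (-r + f(drive)) + np.sqrt(dt) * sigma * rng.normal(size=2)
    r = np.clip(r, 0.0, 1.0)
    a += dt * (r - a) / tau_a                       # slow adaptation tracks activity
    dominant.append(int(r[1] > r[0]))

switches = int(np.sum(np.abs(np.diff(dominant))))
print(f"perceptual switches in {T:.0f} s: {switches}")
```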


2005 ◽  
Vol 52 (10-11) ◽  
pp. 503-508 ◽  
Author(s):  
K. Chandran ◽  
Z. Hu ◽  
B.F. Smets

Several techniques have been proposed for biokinetic estimation of nitrification. Recently, an extant respirometric assay has been presented that yields kinetic parameters for both nitrification steps with minimal physiological change to the microorganisms during the assay. Herein, the ability of biokinetic parameter estimates from the extant respirometric assay to adequately describe concurrently obtained NH4+-N and NO2−-N substrate depletion profiles is evaluated. Based on our results, in general, the substrate depletion profiles resulted in higher estimates of the maximum specific growth rate coefficient, μmax, for both NH4+-N to NO2−-N oxidation and NO2−-N to NO3−-N oxidation compared to estimates from the extant respirograms. The trends in the kinetic parameter estimates from the different biokinetic estimation techniques are paralleled in the nature of substrate depletion profiles obtained from best-fit parameters. Based on visual inspection, in general, best-fit parameters from optimally designed complete respirograms provided a better description of the substrate depletion profiles than estimates from isolated respirograms. Nevertheless, the sum of squared errors for the best-fit respirometry-based parameters was outside the 95% joint confidence interval computed for the best-fit substrate depletion-based parameters. Notwithstanding the differences in kinetic parameter estimates determined in this study, estimates from the different biokinetic estimation techniques remain close to those reported in the literature. Additional parameter identifiability and sensitivity analysis of parameters from substrate depletion assays revealed high precision of parameters and high parameter correlation. Although biokinetic estimation via automated extant respirometry is far more facile than via manual substrate depletion measurements, additional sensitivity analyses are needed to test the impact of differences in the resulting parameter values on continuous reactor performance.
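For readers unfamiliar with the underlying model, a minimal sketch of two-step Monod nitrification (NH4+-N to NO2−-N to NO3−-N) as it would be fit to substrate depletion profiles is shown below; all kinetic constants are placeholder values, not estimates from this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max = np.array([0.8, 0.7])    # 1/d: ammonia- and nitrite-oxidizer growth rates
Ks     = np.array([0.5, 1.0])    # mg N/L: half-saturation constants
Y      = np.array([0.15, 0.05])  # mg biomass / mg N: yields

def rhs(t, y):
    nh4, no2, Xa, Xn = y                       # substrates and the two biomasses
    mu1 = mu_max[0] * nh4 / (Ks[0] + nh4)      # Monod rate, step 1
    mu2 = mu_max[1] * no2 / (Ks[1] + no2)      # Monod rate, step 2
    return [-mu1 * Xa / Y[0],                  # NH4+-N consumed
            mu1 * Xa / Y[0] - mu2 * Xn / Y[1], # NO2--N produced, then oxidized
            mu1 * Xa, mu2 * Xn]                # biomass growth

sol = solve_ivp(rhs, (0, 2), [30.0, 0.0, 5.0, 2.0], max_step=0.01)
print("NH4-N, NO2-N after 2 d:", sol.y[0, -1].round(2), sol.y[1, -1].round(2))
```

Fitting such a system to measured depletion curves is what yields the μmax estimates the abstract compares against respirometric ones.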


2014 ◽  
Vol 2014 (1) ◽  
pp. 1113-1125
Author(s):  
Xiaolong Geng ◽  
Michel C. Boufadel

ABSTRACT In April 2010, the explosion of the Deepwater Horizon (DWH) drilling platform led to the release of nearly 4.9 million barrels of crude oil into the Gulf of Mexico. The oil was brought to the supratidal zone of beaches (landward of the high tide line) by waves during storms, and was buried during subsequent storms. The objective of this paper is to investigate the biodegradation of subsurface oil in a tidally influenced sand beach located at Bon Secour National Wildlife Refuge and polluted by the DWH oil spill. Two transects were installed perpendicular to the shoreline within the supratidal zone of the beach. One transect had four galvanized steel piezometer wells to measure the water level. The other transect had four stainless steel multiport sampling wells that were used to collect pore water samples below the beach surface. The samples were analyzed for dissolved oxygen (DO), nitrogen, and redox conditions. Sediment samples were also collected at different depths to measure residual oil concentrations and microbial biomass. As the biodegradation of hydrocarbons was of interest, a biological model based on Monod kinetics was developed and coupled to the transport model MARUN, which is a two-dimensional (vertical slice) finite element model for water flow and solute transport in tidally influenced beaches. The resulting coupled model, BIOMARUN, was used to simulate the biodegradation of total n-alkanes and polycyclic aromatic hydrocarbons (PAHs) trapped as residual oil in the unsaturated zone. Model parameter estimates were constrained by published Monod kinetics parameters. The field measurements, such as the concentrations of the oil, microbial biomass, nitrogen, and DO, were used as inputs for the simulations. The biodegradation of alkanes and PAHs was predicted in the simulation, and sensitivity analyses were conducted to assess the effect of the model parameters on the modeling results. Simulation results indicated that n-alkanes and PAHs would be biodegraded by 80% after 2 ± 0.5 years and 3.5 ± 0.5 years, respectively.
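A stripped-down sketch of the biological component is given below: Monod biodegradation of residual alkanes limited by nitrogen availability, without the coupled MARUN flow and transport. The rate constants, half-saturation values, and replenishment terms are illustrative assumptions only, not the calibrated BIOMARUN parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

q_max, K_s, K_n, Y = 0.5, 10.0, 0.5, 0.2    # max rate, half-saturations, yield

def rhs(t, y):
    S, N, X = y                             # alkanes, nitrogen, biomass (mg/kg sand)
    N = max(N, 0.0)                         # numerical guard
    rate = q_max * X * (S / (K_s + S)) * (N / (K_n + N))   # dual-Monod limitation
    return [-rate,                          # hydrocarbon consumed
            0.005 - 0.05 * rate,            # slow tidal nitrogen resupply minus uptake
            Y * rate - 0.02 * X]            # biomass growth minus decay

sol = solve_ivp(rhs, (0, 730), [100.0, 0.5, 1.0], max_step=1.0)
print(f"alkanes degraded after 2 years: {1 - sol.y[0, -1] / 100.0:.0%}")
```

With nutrient resupply as the bottleneck, degradation proceeds over years rather than weeks, which is the regime the field study describes.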


2013 ◽  
Vol 17 (12) ◽  
pp. 4995-5011 ◽  
Author(s):  
Y. Sun ◽  
Z. Hou ◽  
M. Huang ◽  
F. Tian ◽  
L. Ruby Leung

Abstract. This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
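The following toy example illustrates the MCMC-Bayesian inversion step on a drastically simplified "model" with a single runoff parameter; the real study inverts multiple CLM4 hydrologic parameters against surface flux and runoff observations, so everything here (forcing, likelihood, prior bounds) is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
forcing = rng.gamma(2.0, 2.0, size=200)              # synthetic precipitation series
true_k = 0.35                                        # "true" runoff coefficient
obs = true_k * forcing + rng.normal(0.0, 0.3, 200)   # noisy "observed" runoff

def log_post(k, sd=0.3):
    if not 0.0 < k < 1.0:                            # uniform prior on (0, 1)
        return -np.inf
    resid = obs - k * forcing                        # Gaussian likelihood
    return -0.5 * np.sum(resid**2) / sd**2

k, samples = 0.5, []
for _ in range(5000):                                # random-walk Metropolis
    prop = k + rng.normal(0.0, 0.05)
    if np.log(rng.random()) < log_post(prop) - log_post(k):
        k = prop
    samples.append(k)

post = np.array(samples[1000:])                      # discard burn-in
print(f"posterior k: {post.mean():.3f} +/- {post.std():.3f} (true {true_k})")
```

The narrowing of the posterior as more observations accumulate mirrors the shrinking predictive intervals the study reports.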


BMC Medicine ◽  
2020 ◽  
Vol 18 (1) ◽  
Author(s):  
Kevin van Zandvoort ◽  
◽  
Christopher I. Jarvis ◽  
Carl A. B. Pearson ◽  
Nicholas G. Davies ◽  
...  

Abstract Background The health impact of COVID-19 may differ in African settings as compared to countries in Europe or China due to demographic, epidemiological, environmental and socio-economic factors. We evaluated strategies to reduce SARS-CoV-2 burden in African countries, so as to support decisions that balance minimising mortality, protecting health services and safeguarding livelihoods. Methods We used a Susceptible-Exposed-Infectious-Recovered mathematical model, stratified by age, to predict the evolution of COVID-19 epidemics in three countries representing a range of age distributions in Africa (from oldest to youngest average age: Mauritius, Nigeria and Niger), under various effectiveness assumptions for combinations of different non-pharmaceutical interventions: self-isolation of symptomatic people, physical distancing and ‘shielding’ (physical isolation) of the high-risk population. We adapted model parameters to better represent uncertainty about what might be expected in African populations, in particular by shifting the distribution of severity risk towards younger ages and increasing the case-fatality ratio. We also present sensitivity analyses for key model parameters subject to uncertainty. Results We predicted median symptomatic attack rates over the first 12 months of 23% (Niger) to 42% (Mauritius), peaking at 2–4 months, if epidemics were unmitigated. Self-isolation while symptomatic had a maximum impact of about 30% on reducing severe cases, while the impact of physical distancing varied widely depending on percent contact reduction and R0. The effect of shielding high-risk people, e.g. by rehousing them in physical isolation, was sensitive mainly to residual contact with low-risk people, and to a lesser extent to contact among shielded individuals. Mitigation strategies incorporating self-isolation of symptomatic individuals, moderate physical distancing and high uptake of shielding reduced predicted peak bed demand and mortality by around 50%. Lockdowns delayed epidemics by about 3 months. Estimates were sensitive to differences in age-specific social mixing patterns, as published in the literature, and assumptions on transmissibility, infectiousness of asymptomatic cases and risk of severe disease or death by age. Conclusions In African settings, as elsewhere, current evidence suggests large COVID-19 epidemics are expected. However, African countries have fewer means to suppress transmission and manage cases. We found that self-isolation of symptomatic persons and general physical distancing are unlikely to avert very large epidemics, unless distancing takes the form of stringent lockdown measures. However, both interventions help to mitigate the epidemic. Shielding of high-risk individuals can reduce health service demand and, even more markedly, mortality if it features high uptake and low contact of shielded and unshielded people, with no increase in contact among shielded people. Strategies combining self-isolation, moderate physical distancing and shielding could achieve substantial reductions in mortality in African countries. Temporary lockdowns, where socioeconomically acceptable, can help gain crucial time for planning and expanding health service capacity.
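As a schematic of the modeling approach, the sketch below runs a two-age-group SEIR system in which physical distancing scales the contact matrix; all rates, contact values, and population fractions are illustrative placeholders rather than the study's fitted parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

C = np.array([[8.0, 3.0],                    # daily contacts: young with young/old
              [3.0, 4.0]])                   # old with young/old
beta, sigma, gamma = 0.03, 1 / 4.6, 1 / 5.0  # per-contact transmission, 1/latent, 1/infectious
N = np.array([0.7, 0.3])                     # population fractions: young, old
seed = 1e-4
y0 = np.concatenate([N - seed, np.zeros(2), np.full(2, seed), np.zeros(2)])

def seir(t, y, reduction):
    S, E, I, R = y.reshape(4, 2)
    lam = beta * ((1 - reduction) * C) @ (I / N)   # force of infection by age
    return np.concatenate([-lam * S, lam * S - sigma * E,
                           sigma * E - gamma * I, gamma * I])

for reduction in (0.0, 0.5):                 # unmitigated vs. 50% physical distancing
    sol = solve_ivp(seir, (0, 365), y0, args=(reduction,), max_step=1.0)
    attack = 1 - sol.y[:2, -1] / N           # final attack rate by age group
    print(f"contact reduction {reduction:.0%}: attack rates {np.round(attack, 2)}")
```

The study's younger African age pyramids and shielding interventions correspond, in this schematic, to shifting N toward the first group and selectively scaling rows of C for the high-risk group.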


2011 ◽  
Vol 15 (11) ◽  
pp. 3591-3603 ◽  
Author(s):  
R. Singh ◽  
T. Wagener ◽  
K. van Werkhoven ◽  
M. E. Mann ◽  
R. Crane

Abstract. Projecting how future climatic change might impact streamflow is an important challenge for hydrologic science. The common approach to solve this problem is by forcing a hydrologic model, calibrated on historical data or using a priori parameter estimates, with future scenarios of precipitation and temperature. However, several recent studies suggest that the climatic regime of the calibration period is reflected in the resulting parameter estimates and model performance can be negatively impacted if the climate for which projections are made is significantly different from that during calibration. So how can we calibrate a hydrologic model for historically unobserved climatic conditions? To address this issue, we propose a new trading-space-for-time framework that utilizes the similarity between the predictions under change (PUC) and predictions in ungauged basins (PUB) problems. In this new framework we first regionalize climate dependent streamflow characteristics using 394 US watersheds. We then assume that this spatial relationship between climate and streamflow characteristics is similar to the one we would observe between climate and streamflow over long time periods at a single location. This assumption is what we refer to as trading-space-for-time. Therefore, we change the limits for extrapolation to future climatic situations from the restricted locally observed historical variability to the variability observed across all watersheds used to derive the regression relationships. A typical watershed model is subsequently calibrated (conditioned) on the predicted signatures for any future climate scenario to account for the impact of climate on model parameters within a Bayesian framework. As a result, we can obtain ensemble predictions of continuous streamflow at both gauged and ungauged locations. The new method is tested in five US watersheds located in historically different climates using synthetic climate scenarios generated by increasing mean temperature by up to 8 °C and changing mean precipitation by −30% to +40% from their historical values. Depending on the aridity of the watershed, streamflow projections using adjusted parameters became significantly different from those using historically calibrated parameters if precipitation change exceeded −10% or +20%. In general, the trading-space-for-time approach resulted in a stronger watershed response to climate change for both high and low flow conditions.
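A minimal sketch of the regionalization step at the heart of trading-space-for-time follows: regress a streamflow signature on a climate index across watersheds, then read off the signature expected under a future climate at a single site. The data, the log-linear form, and the aridity values are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
aridity = rng.uniform(0.3, 3.0, 394)               # PET/P index across 394 basins
runoff_ratio = np.clip(0.8 * np.exp(-0.9 * aridity)
                       + rng.normal(0.0, 0.03, 394), 0.01, 1.0)

# Regionalize: log-linear regression log(RR) ~ a + b * aridity across space
b, a = np.polyfit(aridity, np.log(runoff_ratio), 1)

# Trade space for time: evaluate the same relation at a site's future climate
for label, x in [("current", 1.2), ("future (+dT, -dP)", 1.8)]:
    print(f"{label} aridity {x}: expected runoff ratio {np.exp(a + b * x):.2f}")
```

In the full framework, the predicted signatures then condition the watershed model's parameters within a Bayesian scheme rather than being used directly.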


Processes ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 27 ◽  
Author(s):  
René Schenkendorf ◽  
Xiangzhong Xie ◽  
Moritz Rehbein ◽  
Stephan Scholl ◽  
Ulrike Krewer

In the field of chemical engineering, mathematical models have proven to be an indispensable tool for process analysis, process design, and condition monitoring. To gain the most benefit from model-based approaches, the implemented mathematical models have to be based on sound principles, and they need to be calibrated to the process under study with suitable model parameter estimates. Model parameters identified from experimental data, however, often carry severe uncertainties, leading to incorrect or biased inferences. This applies in particular in the field of pharmaceutical manufacturing, where the measurement data are usually limited in quantity and quality when analyzing novel active pharmaceutical ingredients. Optimally designed experiments, in turn, aim to increase the quality of the gathered data in the most efficient way. Any improvement in data quality results in more precise parameter estimates and more reliable model candidates. The methods applied for parameter sensitivity analyses and the chosen design criteria are crucial for the effectiveness of the optimal experimental design. In this work, different design measures based on global parameter sensitivities are critically compared with state-of-the-art concepts that follow simplifying linearization principles. The efficient implementation of the proposed sensitivity measures is explicitly addressed so as to be applicable to complex chemical engineering problems of practical relevance. As a case study, the homogeneous synthesis of 3,4-dihydro-1H-1-benzazepine-2,5-dione, a scaffold for the preparation of various protein kinase inhibitors, is analyzed, followed by a more complex model of biochemical reactions. In both studies, the model-based optimal experimental design benefits from global parameter sensitivities combined with proper design measures.
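To illustrate the kind of global measure such design criteria build on, the sketch below computes first-order Sobol sensitivity indices for a toy two-parameter first-order reaction model using the Saltelli/Jansen Monte Carlo estimator; the model and parameter ranges are placeholders, far simpler than the benzazepine synthesis analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, c0, t_obs=1.0):
    """Concentration remaining after t_obs for a first-order reaction."""
    return c0 * np.exp(-k * t_obs)

n = 20000
lo, hi = [0.5, 0.8], [2.0, 1.2]                     # ranges for (k, c0)
A = rng.uniform(lo, hi, size=(n, 2))                # two independent sample matrices
B = rng.uniform(lo, hi, size=(n, 2))

yA, yB = model(A[:, 0], A[:, 1]), model(B[:, 0], B[:, 1])
var = yA.var()
for i, name in enumerate(["k", "c0"]):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                             # Saltelli's A_B^i matrix
    S1 = np.mean(yA * (model(ABi[:, 0], ABi[:, 1]) - yB)) / var
    print(f"first-order Sobol index for {name}: {S1:.2f}")
```

Unlike linearized (local derivative) measures, these indices attribute output variance over the whole parameter range, which is the distinction the comparison in the paper turns on.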


2019 ◽  
Vol 29 (4) ◽  
pp. 480-495
Author(s):  
Olga G. Kantor ◽  
Semen I. Spivak ◽  
Nikolay D. Morozkin

Introduction. A model of a given structure is identified by solving the problem of parametric identification, and it should provide the best possible reproduction of the experimental data. The concept of “best”, however, is not strictly defined. The procedure for identifying such a model therefore follows a natural logic: first determine a set of acceptable models, then select the best among them. If the set of acceptable models is large, determining the best one can be time-consuming. For this reason, it is especially valuable to develop methods of parametric identification that, already at the stage of constructing the set of acceptable models, take into account the qualitative properties of the identified dependence that are of interest to the researcher. Materials and Methods. The set of methods applicable to parametric identification problems largely depends on the type of uncertainty in the experimental data. For example, probabilistic and statistical methods are useful when the observed factors are random and follow some probability distribution law. When the conditions for using such methods are not met, an approach based on identifying the boundaries of the region of model parameter values that ensures the achievement of specified levels of quality characteristics can be useful. Results. A procedure for the parametric identification of models is formalized. It is based on maximum permissible parameter estimates and allows one to determine the set of parameter values that guarantee the required qualitative level of description of the experimental data, including from the standpoint of analyzing how changes in the required reproduction accuracy affect that set. The developed method is tested on the construction of a one-factor model of chemical kinetics. Discussion and Conclusion. It is shown that the obtained value of the chemical reaction rate constant provides, in accordance with the introduced criteria, acceptable accuracy, adequacy, and stability of the identified kinetic model. At the same time, the calculations revealed information that can form the basis for planning experiments aimed at improving the accuracy of the experimental data.
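A minimal sketch of the maximum-permissible-estimates idea follows: scan a rate constant and keep every value whose worst-case reproduction error stays within a required accuracy, yielding an admissible parameter region rather than a single best fit. The data and tolerance are synthetic assumptions.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])               # sampling times
c_obs = np.array([1.00, 0.61, 0.37, 0.14, 0.02])      # synthetic concentrations
tol = 0.03                                            # required reproduction accuracy

k_grid = np.linspace(0.1, 1.0, 901)                   # candidate rate constants
errs = np.array([np.max(np.abs(np.exp(-k * t) - c_obs)) for k in k_grid])
ok = k_grid[errs <= tol]                              # the admissible set

print(f"admissible k: [{ok.min():.3f}, {ok.max():.3f}]"
      if ok.size else "no k meets the accuracy requirement")
```

Tightening `tol` shrinks the admissible interval, which is exactly the accuracy-requirement analysis the abstract describes.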


2021 ◽  
Vol 12 ◽  
Author(s):  
Sima Azizi ◽  
Daniel B. Hier ◽  
Blaine Allen ◽  
Tayo Obafemi-Ajayi ◽  
Gayla R. Olbricht ◽  
...  

Traumatic brain injury (TBI) imposes a significant economic and social burden. The diagnosis and prognosis of mild TBI, also called concussion, is challenging. Concussions are common among contact sport athletes. After a blow to the head, it is often difficult to determine who has had a concussion, who should be withheld from play, whether a concussed athlete is ready to return to the field, and which concussed athlete will develop a post-concussion syndrome. Biomarkers can be detected in the cerebrospinal fluid and blood after traumatic brain injury, and their levels may have prognostic value. Despite significant investigation, questions remain as to the trajectories of blood biomarker levels over time after mild TBI. Modeling the kinetic behavior of these biomarkers could be informative. We propose a one-compartment kinetic model for S100B, UCH-L1, NF-L, GFAP, and tau biomarker levels after mild TBI based on accepted pharmacokinetic models for oral drug absorption. We approximated model parameters using previously published studies. Since parameter estimates were approximate, we performed uncertainty and sensitivity analyses. Using estimated kinetic parameters for each biomarker, we applied the model to an available post-concussion dataset of UCH-L1, GFAP, tau, and NF-L biomarker levels. We have demonstrated the feasibility of modeling blood biomarker levels after mild TBI with a one-compartment kinetic model. More work is needed to better establish model parameters and to understand the implications of the model for the diagnostic use of these blood biomarkers in mild TBI.
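A minimal sketch of the proposed one-compartment form, assuming first-order release from injured tissue (k_a) and first-order clearance from blood (k_e), is the Bateman equation familiar from oral-absorption pharmacokinetics; the rate constants and scale below are illustrative, not the paper's fitted values.

```python
import numpy as np

def biomarker(t, dose, k_a, k_e):
    """Blood level after injury at t = 0 (Bateman equation, requires k_a != k_e)."""
    return dose * k_a / (k_a - k_e) * (np.exp(-k_e * t) - np.exp(-k_a * t))

t = np.linspace(0.0, 240.0, 241)                      # hours post-injury
level = biomarker(t, dose=100.0, k_a=0.15, k_e=0.05)  # release and clearance rates (1/h)
print(f"peak {level.max():.1f} a.u. at {t[level.argmax()]:.0f} h post-injury")
```

Varying k_a and k_e over plausible ranges is the natural uncertainty/sensitivity analysis when, as here, the parameters are only approximately known.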


2018 ◽  
Vol 146 (4) ◽  
pp. 496-507 ◽  
Author(s):  
D. B. C. Wu ◽  
N. Chaiyakunapruk ◽  
C. Pratoomsoot ◽  
K. K. C. Lee ◽  
H. Y. Chong ◽  
...  

Abstract. Simulation models are used widely in pharmacology, epidemiology and health economics (HE). However, there have been no attempts to incorporate models from these disciplines into a single integrated model. Accordingly, we explored this linkage to evaluate the epidemiological and economic impact of oseltamivir dose optimisation in supporting pandemic influenza planning in the USA. An HE decision analytic model was linked to a pharmacokinetic/pharmacodynamic (PK/PD) dynamic transmission model simulating the impact of pandemic influenza with low virulence and low transmissibility, and with high virulence and high transmissibility. The cost-utility analysis was conducted from the payer and societal perspectives, comparing oseltamivir 75 and 150 mg twice daily (BID) to no treatment over a 1-year time horizon. Model parameters were derived from published studies. Outcomes were measured as cost per quality-adjusted life year (QALY) gained. Sensitivity analyses were performed to examine the integrated model's robustness. Under both pandemic scenarios, compared to no treatment, the use of oseltamivir 75 or 150 mg BID led to a significant reduction in influenza episodes and influenza-related deaths, translating to substantial savings of QALYs. Overall drug costs were offset by the reduction in both direct and indirect costs, making these two interventions cost-saving from both perspectives. The results were sensitive to the proportion of inpatient presentation at the emergency visit and to patients' quality of life. Integrating PK/PD–EPI/HE models is achievable. Whilst further refinement of this novel linkage model to more closely mimic reality is needed, the current study has generated useful insights to support influenza pandemic planning.
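The economic endpoint of such a linked model reduces to a cost-utility comparison; the toy sketch below computes an incremental cost-effectiveness ratio from hypothetical per-1000-person outputs of a transmission model. All figures are invented placeholders, not the study's estimates.

```python
def icer(cost_tx, qaly_tx, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio ($ per QALY gained)."""
    return (cost_tx - cost_ref) / (qaly_tx - qaly_ref)

# Hypothetical per-1000-person outputs from the linked transmission model
no_treatment = dict(cost=2_400_000, qaly=720.0)
oseltamivir = dict(cost=2_100_000, qaly=738.5)   # drug cost offset by fewer cases

value = icer(oseltamivir["cost"], oseltamivir["qaly"],
             no_treatment["cost"], no_treatment["qaly"])
print(f"ICER: ${value:,.0f}/QALY" + (" -> dominant (cost-saving)" if value < 0 else ""))
```

A negative ICER with lower costs and higher QALYs, as in this invented example, is the "cost-saving from both perspectives" outcome the abstract reports.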

