Estimation of viral kinetics model parameters in young and aged SARS-CoV-2 infected macaques

2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Thalia Rodriguez ◽  
Hana M. Dobrovolny

The SARS-CoV-2 virus disproportionately causes serious illness and death in older individuals. In order to have the greatest impact in decreasing the human toll of the virus, antiviral treatment should be targeted to older patients; this requires a better understanding of how viral dynamics differ between SARS-CoV-2 infection in younger and older adults. In this study, we use previously published averaged viral titre measurements from the nose and throat of young and aged cynomolgus macaques infected with SARS-CoV-2 to parametrize a viral kinetics model. We find that all viral kinetics parameters differ between young and aged macaques in the nasal passages, but that there are fewer differences in the parameter estimates from the throat. We further use our parametrized model to study antiviral treatment of young and aged animals, finding that early antiviral treatment is more likely to lengthen the infection in aged animals, but not in young animals.
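As a rough illustration of the class of model the abstract describes, the sketch below integrates a standard target-cell-limited viral kinetics model with an eclipse phase, the generic form commonly used in this literature. The parameter values and initial conditions are illustrative placeholders, not the fitted macaque estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def viral_kinetics(t, y, beta, k, delta, p, c):
    """Target-cell-limited model with an eclipse phase:
    T: target cells, E: infected but not yet producing,
    I: productively infected cells, V: free virus."""
    T, E, I, V = y
    dT = -beta * T * V
    dE = beta * T * V - k * E
    dI = k * E - delta * I
    dV = p * I - c * V
    return [dT, dE, dI, dV]

# Illustrative (hypothetical) parameter values, not the fitted estimates
params = (1e-5, 4.0, 1.0, 10.0, 3.0)   # beta, k, delta, p, c
y0 = [1e7, 0.0, 0.0, 1.0]              # initial target cells and inoculum
sol = solve_ivp(viral_kinetics, (0, 14), y0, args=params,
                dense_output=True, rtol=1e-8)
t = np.linspace(0, 14, 200)
V = sol.sol(t)[3]
print(f"peak titre ~{V.max():.3g} at day {t[V.argmax()]:.1f}")
```

Antiviral treatment is often represented in such models by scaling the production rate p or the infection rate beta by (1 − efficacy) from the treatment start time onward.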

2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
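A minimal sketch of the calibration machinery the abstract relies on: random-walk Metropolis sampling of a parameter posterior, here for a toy linear model standing in for the alongshore-current model (which is not given in the abstract). Stepwise adding data, or calibrating on different datasets, amounts to changing the data entering the log-posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration data": linear model with known noise
# (a stand-in for the current-velocity model; true parameter is 2.5)
x = np.linspace(0, 1, 50)
y = 2.5 * x + rng.normal(0, 0.1, x.size)

def log_post(theta):
    """Log-posterior with a flat prior: Gaussian likelihood, sigma = 0.1."""
    return -0.5 * np.sum((y - theta * x) ** 2) / 0.1**2

# Random-walk Metropolis
theta, chain = 0.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[1000:])                 # discard burn-in
print(f"posterior mean {chain.mean():.3f} +/- {chain.std():.3f}")
```

The posterior mode of the chain is the best-fit parameter, and the spread of the chain is the parameter uncertainty the abstract refers to.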


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to develop good 1-day and 2-day ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
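The sketch below shows the core predict/update cycle of a scalar Kalman filter on a synthetic daily flow series. The random-walk state model and the noise variances are assumed for illustration only and are unrelated to the MISP algorithm's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily flow (m^3/s): smooth signal plus measurement noise
days = np.arange(100)
true_flow = 50 + 20 * np.sin(2 * np.pi * days / 100)
obs = true_flow + rng.normal(0, 5, days.size)

# Scalar random-walk (local-level) Kalman filter
q, r = 5.0, 25.0          # process and measurement noise variances (assumed)
x, p = obs[0], 10.0       # state estimate and its variance
est = []
for z in obs:
    p += q                      # predict: variance grows by process noise
    k = p / (p + r)             # Kalman gain balances model vs. data trust
    x += k * (z - x)            # update with the innovation
    p *= (1 - k)
    est.append(x)
est = np.array(est)

rmse_raw = np.sqrt(np.mean((obs - true_flow) ** 2))
rmse_kf = np.sqrt(np.mean((est - true_flow) ** 2))
print(f"RMSE raw {rmse_raw:.2f} -> filtered {rmse_kf:.2f}")
```

In the multi-day setting of the abstract, the same predict step is simply iterated several times without an update, which is why forecast quality degrades with lead time.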


2011 ◽  
Vol 64 (S1) ◽  
pp. S3-S18 ◽  
Author(s):  
Yuanxi Yang ◽  
Jinlong Li ◽  
Junyi Xu ◽  
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial because it increases the number of observable satellites, alleviates the effects of systematic errors and improves the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models as well as the estimation principle for PNT may differ, so the commonly used definition of “dilution of precision (DOP)”, based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of pseudo-ranges, which reflects the geometry strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the one commonly used. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined based on the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined based on Bayesian estimation, in which a priori information on the model parameters is taken into account; this is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of the calculation of these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations of Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated. The results show that if the interoperability of multi-GNSS is not fulfilled, the increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements will not change the original DOP, but will change the first type of generalised DOP, which includes a robust error IF. A priori information on the model parameters will also reduce the DOP.
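As a reference point, the classical LS-based DOP that the first generalised DOP reduces to can be computed from the cofactor matrix of the pseudo-range design matrix. The satellite directions below are hypothetical; adding a satellite adds a rank-one term to GᵀG, so this DOP cannot increase.

```python
import numpy as np

def dop(sat_enu):
    """Geometric DOP from unit line-of-sight vectors (receiver at origin).
    Design-matrix rows: [-ux, -uy, -uz, 1] (3 position unknowns + clock)."""
    u = sat_enu / np.linalg.norm(sat_enu, axis=1, keepdims=True)
    G = np.hstack([-u, np.ones((u.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)      # LS cofactor matrix
    return np.sqrt(np.trace(Q))

# Hypothetical satellite directions in local ENU coordinates
sats = np.array([
    [ 0.0,  0.3, 1.0],
    [ 0.8, -0.2, 0.6],
    [-0.7,  0.5, 0.5],
    [ 0.2, -0.9, 0.4],
    [-0.3, -0.4, 0.9],
])
print(f"GDOP with 5 satellites: {dop(sats):.2f}")

# A sixth satellite in a new direction can only improve (or keep) the DOP
more = np.vstack([sats, [0.9, 0.6, 0.3]])
print(f"GDOP with 6 satellites: {dop(more):.2f}")
```

The generalised DOPs of the paper replace this pure-geometry cofactor with influence functions, extra inter-system parameters, or a Bayesian prior, which is what breaks the "more satellites always help" intuition when interoperability fails.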


1981 ◽  
Vol 240 (5) ◽  
pp. R259-R265 ◽  
Author(s):  
J. J. DiStefano

Design of optimal blood sampling protocols for kinetic experiments is discussed and evaluated with the aid of several examples, including an endocrine system case study. The criterion of optimality is maximum accuracy of the kinetic model parameter estimates. A simple example illustrates why a sequential experiment approach is required: optimal designs depend on the true model parameter values, knowledge of which is usually a primary objective of the experiment, as well as on the structure of the model and the measurement error (e.g., assay) variance. The methodology is evaluated from the results of a series of experiments designed to quantify the dynamics of distribution and metabolism of three iodothyronines, T3, T4, and reverse-T3. This analysis indicates that 1) the sequential optimal experiment approach can be effective and efficient in the laboratory, 2) it works in the presence of reasonably controlled biological variation, producing sufficiently robust sampling protocols, and 3) optimal designs can be highly efficient in practice, requiring for maximum accuracy a number of blood samples equal to the number of independently adjustable model parameters, no more and no fewer.
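The dependence of the optimal design on the true parameter values can be made concrete with a mono-exponential model c(t) = A·exp(−kt), an assumed stand-in rather than the iodothyronine model: a D-optimal design maximizes the determinant of the Fisher information, and the best design uses exactly as many samples as there are adjustable parameters.

```python
import numpy as np
from itertools import combinations

def fisher_det(times, A, k, sigma):
    """det of the Fisher information for c(t) = A*exp(-k*t) with
    i.i.d. Gaussian measurement error (variance sigma^2)."""
    t = np.asarray(times, float)
    e = np.exp(-k * t)
    # Sensitivities dc/dA and dc/dk form the Jacobian
    J = np.column_stack([e, -A * t * e])
    return np.linalg.det(J.T @ J / sigma**2)

# Nominal ("true") parameters must be guessed before designing, which is
# exactly why a sequential approach is needed; values are illustrative
A, k, sigma = 10.0, 0.5, 0.2
candidates = np.arange(0.25, 12.25, 0.25)   # feasible sampling times (h)

# Best 2-point design: as many samples as adjustable parameters
best = max(combinations(candidates, 2),
           key=lambda d: fisher_det(d, A, k, sigma))
print(f"D-optimal 2-sample times: t1={best[0]:.2f} h, t2={best[1]:.2f} h")
```

For this model the optimum places one sample as early as allowed and the second 1/k later; a wrong nominal k therefore shifts the whole design, motivating the sequential refinement the abstract evaluates.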


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yang Liu ◽  
Penghao Wang ◽  
Melissa L. Thomas ◽  
Dan Zheng ◽  
Simon J. McKirdy

Abstract. Invasive species can cause community-level damage to the invaded ecosystem and extinction of native species. Most surveillance systems for the detection of invasive species are developed from expert assessment and therefore carry an inherent level of uncertainty. In this research, info-gap decision theory (IGDT) is applied to model and manage such uncertainty. Surveillance of the Asian House Gecko, Hemidactylus frenatus Duméril and Bibron, 1836, on Barrow Island is used as a case study. Our research provides a novel method for applying IGDT to determine the population threshold K so that the decision can be robust to the deep uncertainty present in the model parameters. We further robust-optimize surveillance costs rather than simply minimizing them. We demonstrate that increasing the population threshold for detection increases both robustness to errors in the model parameter estimates and opportuneness for surveillance costs lower than the accepted maximum budget. This paper provides guidance for decision makers to balance robustness and required surveillance expenditure. IGDT offers a novel method to model and manage the uncertainty prevalent in biodiversity conservation practice and modelling. The method outlined here can be used to design robust surveillance systems for invasive species in a wider context, and to better tackle uncertainty in the protection of biodiversity and native species in a cost-effective manner.
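The robustness-versus-threshold trade-off can be sketched with a deliberately simple stand-in model; the cost function, nominal detection probability and budget below are all hypothetical and are not the paper's surveillance model.

```python
# All functional forms here are illustrative stand-ins, not the paper's model.
C0 = 500.0      # baseline surveillance effort cost (hypothetical)
p_hat = 0.6     # nominal per-unit detection probability (hypothetical)
budget = 300.0  # maximum acceptable surveillance cost

def cost(K, p):
    """Hypothetical cost of reliable detection once the population reaches
    threshold K: larger populations and better detection are cheaper."""
    return C0 / (p * K)

def robustness(K):
    """Info-gap robustness: the largest fractional error alpha in p_hat such
    that even the worst case p = p_hat*(1 - alpha) stays within budget.
    Solving cost(K, p_hat*(1 - alpha)) = budget for alpha gives:"""
    worst_frac = C0 / (p_hat * K * budget)
    return max(0.0, 1.0 - worst_frac)

for K in (2, 5, 10):
    print(f"threshold K={K:2d}: robustness alpha = {robustness(K):.3f}")
```

The printed values rise with K, mirroring the abstract's point: a higher detection threshold buys immunity to larger parameter errors, at the price of later detection.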


2020 ◽  
Vol 17 (173) ◽  
pp. 20200886
Author(s):  
L. Mihaela Paun ◽  
Mitchel J. Colebank ◽  
Mette S. Olufsen ◽  
Nicholas A. Hill ◽  
Dirk Husmeier

This study uses Bayesian inference to quantify the uncertainty of model parameters and haemodynamic predictions in a one-dimensional pulmonary circulation model based on an integration of mouse haemodynamic and micro-computed tomography imaging data. We emphasize an often neglected, though important, source of uncertainty: the discrepancy between the mathematical model and reality, and a misspecified measurement noise model (jointly called ‘model mismatch’). We demonstrate that minimizing the mean squared error between the measured and the predicted data (the conventional method) in the presence of model mismatch leads to biased and overly confident parameter estimates and haemodynamic predictions. We show that our proposed method allowing for model mismatch, which we represent with Gaussian processes, corrects this bias. Additionally, we compare a linear and a nonlinear wall model, as well as models with different vessel stiffness relations. We use formal model selection analysis based on the Watanabe-Akaike information criterion to select the model that best predicts the pulmonary haemodynamics. Results show that the nonlinear pressure-area relationship with stiffness dependent on the unstressed radius best predicts the data measured in a control mouse.


1993 ◽  
Vol 57 (1) ◽  
pp. 99-104 ◽  
Author(s):  
J. C. Williams

Abstract. The following goat lactation model was fitted (using non-linear regression) to 407 lactations from five commercial goat dairies and one Research Institute goat herd: y = A exp(B(1 + n'/2)n' + Cn'² − 1·01/n), where y = daily yield in kg; n = day of lactation (post parturition); and n' = (n − 150)/100. The influence of farm, parity and season on the parameter estimates for 376 individual lactations was studied using multiple linear regression. The models adopted were of the form: A = 1·366 + 1·122 × parity − 0·137 × parity²; ln(−B) = −1·711 + 0·107 × parity + 0·512 × season one; C = 0·037, with a standard deviation for A of 0·658, for ln(−B) of 0·636 and for C of 0·127. The influence of litter size on the parameters was investigated for the Research Institute herd; there was no evidence of an effect on any of the model parameters.
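The non-linear regression step can be reproduced on synthetic data with scipy's curve_fit; the "true" parameter values and the multiplicative noise below are illustrative, not values from the goat dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def lactation(n, A, B, C):
    """Lactation curve y = A*exp(B*(1 + n'/2)*n' + C*n'^2 - 1.01/n),
    with n' = (n - 150)/100 and n = day of lactation."""
    np_ = (n - 150.0) / 100.0
    return A * np.exp(B * (1 + np_ / 2) * np_ + C * np_**2 - 1.01 / n)

rng = np.random.default_rng(2)
days = np.arange(5, 300, 7, dtype=float)   # weekly yield recording
true = (2.5, -0.3, 0.04)                   # illustrative A, B, C
yield_kg = lactation(days, *true) * rng.lognormal(0, 0.05, days.size)

popt, _ = curve_fit(lactation, days, yield_kg, p0=(2.0, -0.2, 0.0))
print("fitted A, B, C:", np.round(popt, 3))
```

Repeating this fit per lactation and regressing the fitted A, ln(−B) and C on farm, parity and season is the second, linear-regression stage the abstract describes.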


2017 ◽  
Vol 6 (4) ◽  
pp. 236
Author(s):  
Chikashi Tsuji

This paper attempts to derive a careful interpretation of the parameter estimates from one of the multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) models, the full vector-half (VECH) model with asymmetric effects. We also consider and interpret the parameter estimates from a case study of US and Canadian equity index returns obtained by applying this model. More specifically, we first inspect the model formula and derive a general interpretation of the model parameters, which we consider particularly useful for understanding not only the full VECH model structure but also similar MGARCH models. After these general considerations, we interpret the results derived from our application of the full VECH model to US and Canadian equity index returns. We consider these concrete illustrations to be very helpful for future related research.
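A stripped-down sketch of the VECH idea: each element of vech(H_t) follows its own GARCH-type recursion driven by lagged cross-products of returns. This is a diagonal VECH without the asymmetric terms, so it only hints at the structure of the full model the paper interprets; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic return series (stand-ins for US and Canadian index returns)
eps = rng.normal(0, 1, (500, 2)) * 0.01

# Diagonal VECH in vech order [h11, h12, h22]; values are illustrative.
# With a common scalar a and b this is H_t = C + a*eps*eps' + b*H_{t-1}
# in matrix form, which keeps H_t positive semi-definite.
c = np.array([2e-6, 1e-6, 2e-6])
a = np.array([0.05, 0.05, 0.05])     # loading on lagged eps_i * eps_j
b = np.array([0.90, 0.90, 0.90])     # loading on lagged h

h = np.array([1e-4, 5e-5, 1e-4])     # initial vech(H_0)
path = []
for e in eps:
    cross = np.array([e[0] ** 2, e[0] * e[1], e[1] ** 2])
    h = c + a * cross + b * h        # element-wise VECH update
    path.append(h)
path = np.array(path)

corr = path[:, 1] / np.sqrt(path[:, 0] * path[:, 2])
print(f"final conditional correlation: {corr[-1]:.3f}")
```

The asymmetric full VECH adds further terms that load only on negative shocks, which is what lets the model capture the leverage effects the paper discusses.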


2019 ◽  
Vol 57 (1) ◽  
pp. 55-77 ◽  
Author(s):  
Ryan Dew ◽  
Asim Ansari ◽  
Yang Li

Marketing research relies on individual-level estimates to understand the rich heterogeneity of consumers, firms, and products. While much of the literature focuses on capturing static cross-sectional heterogeneity, little research has been done on modeling dynamic heterogeneity, or the heterogeneous evolution of individual-level model parameters. In this work, the authors propose a novel framework for capturing the dynamics of heterogeneity, using individual-level, latent, Bayesian nonparametric Gaussian processes. Similar to standard heterogeneity specifications, this Gaussian process dynamic heterogeneity (GPDH) specification models individual-level parameters as flexible variations around population-level trends, allowing for sharing of statistical information both across individuals and within individuals over time. This hierarchical structure provides precise individual-level insights regarding parameter dynamics. The authors show that GPDH nests existing heterogeneity specifications and that not flexibly capturing individual-level dynamics may result in biased parameter estimates. Substantively, they apply GPDH to understand preference dynamics and to model the evolution of online reviews. Across both applications, they find robust evidence of dynamic heterogeneity and illustrate GPDH’s rich managerial insights, with implications for targeting, pricing, and market structure analysis.
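The core modelling idea, an individual-level parameter path as a Gaussian-process variation around a population-level trend, can be sketched with a squared-exponential kernel. All values below are hypothetical; the actual GPDH model is hierarchical and estimated from data rather than simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(t, amp=0.3, length=5.0):
    """Squared-exponential (RBF) kernel over time points t."""
    d = t[:, None] - t[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

t = np.arange(0, 24, 1.0)              # e.g. months of panel data
trend = 1.0 + 0.05 * t                 # population-level trend (illustrative)

# Each individual's parameter path = shared trend + own smooth GP draw
K = rbf(t) + 1e-8 * np.eye(t.size)     # jitter for numerical stability
L = np.linalg.cholesky(K)
n_individuals = 5
paths = trend + (L @ rng.normal(size=(t.size, n_individuals))).T

print("paths shape:", paths.shape)     # (individuals, time points)
```

The kernel length-scale controls how quickly an individual's parameter can drift; shrinking the amplitude toward zero recovers the static heterogeneity specifications that the abstract says GPDH nests.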


2002 ◽  
Vol 6 (5) ◽  
pp. 883-898 ◽  
Author(s):  
K. Engeland ◽  
L. Gottschalk

Abstract. This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
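The full statistical likelihood described in the abstract, simulation errors as an AR(1) process, can be sketched as an exact Gaussian log-likelihood; setting rho = 0 recovers the simple i.i.d. error model. The synthetic errors below are illustrative, not Ecomag residuals.

```python
import numpy as np

def ar1_loglik(e, sigma, rho):
    """Exact Gaussian log-likelihood of errors e under an AR(1) model:
    e_t = rho*e_{t-1} + w_t, w_t ~ N(0, sigma^2), stationary start."""
    e = np.asarray(e, float)
    var0 = sigma**2 / (1 - rho**2)          # stationary variance of e_1
    ll = -0.5 * (np.log(2 * np.pi * var0) + e[0]**2 / var0)
    resid = e[1:] - rho * e[:-1]            # one-step-ahead innovations
    ll += -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + resid**2 / sigma**2)
    return ll

rng = np.random.default_rng(5)
# Synthetic simulation errors with genuine autocorrelation
n, rho_true, sigma = 500, 0.8, 1.0
w = rng.normal(0, sigma, n)
e = np.empty(n)
e[0] = w[0] / np.sqrt(1 - rho_true**2)
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + w[t]

print(f"log-lik at true rho: {ar1_loglik(e, sigma, rho_true):.1f}")
print(f"log-lik at rho = 0 : {ar1_loglik(e, sigma, 0.0):.1f}")
```

In the paper's setting this log-likelihood is evaluated inside the MCMC loop, with the statistical parameters (sigma, rho) sampled jointly with the hydrological ones.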

