STUDYING THE IDENTIFIABILITY OF EPIDEMIOLOGICAL MODELS USING MCMC

2013 ◽  
Vol 06 (02) ◽  
pp. 1350008 ◽  
Author(s):  
ANTTI SOLONEN ◽  
HEIKKI HAARIO ◽  
JEAN MICHEL TCHUENCHE ◽  
HERIETH RWEZAURA

The theoretical properties of epidemiological models have been widely studied, while numerical studies, and especially the calibration of models against measured data, have received less attention; such models are often complicated and loaded with a large number of unknown parameters. In this paper, we describe how a combination of simulated data and Markov Chain Monte Carlo (MCMC) methods can be used to study the identifiability of model parameters under different types of measurements. Three known models are used as case studies to illustrate the importance of parameter identifiability: a basic SIR model, an influenza model with vaccination and treatment, and an HIV–Malaria co-infection model. The analysis reveals that calibrating the complex models commonly studied in mathematical epidemiology, such as the HIV–Malaria co-dynamics model, can be difficult or impossible even if the system were fully observed. The presented approach provides a tool for the design and optimization of real-life data-collection field campaigns, as well as for model selection.
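The identifiability check described above can be sketched end to end: simulate noisy prevalence data from known SIR parameters, then sample the posterior with a random-walk Metropolis algorithm and inspect the spread of the chain. This is only a minimal illustration; the Gaussian observation noise, flat positive priors, Euler integrator, and tuning constants are simplifying assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch: probe identifiability of SIR parameters (beta, gamma)
# by fitting noisy simulated prevalence data with random-walk Metropolis.
# Noise level, priors, and tuning are illustrative assumptions.

def sir_infected(beta, gamma, i0=0.01, days=60, dt=0.1):
    """Forward-simulate SIR with Euler steps; return daily infected fraction."""
    s, i = 1.0 - i0, i0
    out = []
    for step in range(int(round(days / dt))):
        if step % int(round(1 / dt)) == 0:
            out.append(i)
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + ds * dt, i + di * dt
    return np.array(out)

rng = np.random.default_rng(0)
true_beta, true_gamma, sigma = 0.5, 0.2, 0.005
data = sir_infected(true_beta, true_gamma) + rng.normal(0.0, sigma, 60)

def log_post(theta):
    beta, gamma = theta
    if beta <= 0 or gamma <= 0:               # flat prior on positive values
        return -np.inf
    resid = data - sir_infected(beta, gamma)
    return -0.5 * np.sum((resid / sigma) ** 2)

chain = np.empty((5000, 2))
theta = np.array([0.4, 0.1])
lp = log_post(theta)
for k in range(len(chain)):
    prop = theta + rng.normal(0.0, 0.01, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[k] = theta

# A wide or strongly correlated posterior cloud flags weak identifiability.
print(chain[2500:].mean(axis=0))
```

With only partial observations (e.g. cumulative counts instead of prevalence), the same chain can develop ridge-shaped correlations between beta and gamma, which is exactly the identifiability problem the paper examines.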

Author(s):  
Leila Taghizadeh ◽  
Ahmad Karimi ◽  
Clemens Heitzinger

Abstract The main goal of this paper is to develop forward and inverse modeling of the Coronavirus (COVID-19) pandemic using novel computational methodologies in order to accurately estimate and predict the pandemic. This supports governmental decision making in implementing effective protective measures and preventing new outbreaks. To this end, we use the logistic equation and the SIR system of ordinary differential equations to model the spread of the COVID-19 pandemic. For the inverse modeling, we propose Bayesian inversion techniques, which are robust and reliable approaches, in order to estimate the unknown parameters of the epidemiological models. We use an adaptive Markov-chain Monte-Carlo (MCMC) algorithm to estimate the posterior probability distributions and confidence intervals for the unknown model parameters as well as for the reproduction number. Furthermore, we present a fatality analysis for COVID-19 in Austria, which is also of importance for governmental protective decision making. We perform our analyses on the publicly available data for Austria to estimate the main epidemiological model parameters and to study the effectiveness of the protective measures taken by the Austrian government. The estimated parameters and the analysis of fatalities provide useful information for decision makers and make it possible to perform more realistic forecasts of future outbreaks.
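As a simpler stand-in for the Bayesian machinery (the paper uses adaptive MCMC; here ordinary least squares via `scipy.optimize.curve_fit` is used instead), inverting the logistic growth model for its three parameters looks like this. The data are simulated and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative inversion of the logistic growth model for cumulative
# cases C(t) = K / (1 + exp(-r (t - t0))); K, r, t0 are the unknowns.
# Least squares stands in here for the paper's adaptive MCMC.

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(1)
t = np.arange(80.0)
cases = logistic(t, 15000.0, 0.15, 40.0) + rng.normal(0.0, 100.0, t.size)

popt, pcov = curve_fit(logistic, t, cases, p0=(10000.0, 0.1, 30.0))
perr = np.sqrt(np.diag(pcov))          # rough 1-sigma uncertainties
for name, est, err in zip(("K", "r", "t0"), popt, perr):
    print(f"{name} = {est:.3f} +/- {err:.3f}")
```

The diagonal of `pcov` gives rough frequentist uncertainties; the Bayesian treatment in the paper replaces these with full posterior distributions and credible intervals.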


2019 ◽  
Vol 36 (8) ◽  
pp. 1804-1816 ◽  
Author(s):  
Timothy G Vaughan ◽  
Gabriel E Leventhal ◽  
David A Rasmussen ◽  
Alexei J Drummond ◽  
David Welch ◽  
...  

Abstract Modern phylodynamic methods interpret an inferred phylogenetic tree as a partial transmission chain providing information about the dynamic process of transmission and removal (where removal may be due to recovery, death, or behavior change). Birth–death and coalescent processes have been introduced to model the stochastic dynamics of epidemic spread under common epidemiological models such as the SIS and SIR models and are successfully used to infer phylogenetic trees together with transmission (birth) and removal (death) rates. These methods either integrate analytically over past incidence and prevalence to infer rate parameters, and thus cannot explicitly infer past incidence or prevalence, or allow such inference only in the coalescent limit of large population size. Here, we introduce a particle filtering framework to explicitly infer prevalence and incidence trajectories along with phylogenies and epidemiological model parameters from genomic sequences and case count data in a manner consistent with the underlying birth–death model. After demonstrating the accuracy of this method on simulated data, we use it to assess the prevalence through time of the early 2014 Ebola outbreak in Sierra Leone.
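The particle-filtering idea can be illustrated in miniature. The sketch below tracks only prevalence from noisy case counts of a stochastic SIS-type process with a bootstrap filter (propagate, weight, resample); the paper's filter additionally conditions on phylogenies under a birth-death model, which is far more involved. All rates and sizes are made-up illustrative values.

```python
import numpy as np

# Bootstrap particle filter sketch: infer prevalence I_t of a stochastic
# SIS-type process from Poisson-noisy case counts. Rates, population
# size, and particle count are illustrative assumptions.

rng = np.random.default_rng(2)
T, n_part = 50, 2000
beta, gamma, N = 0.3, 0.1, 1000

# Simulate a "true" prevalence path and noisy observations of it.
true_I = [10]
for _ in range(T - 1):
    I = true_I[-1]
    step = rng.poisson(beta * I * (N - I) / N) - rng.poisson(gamma * I)
    true_I.append(int(np.clip(I + step, 1, N)))
obs = rng.poisson(true_I)

particles = np.full(n_part, 10)
est = []
for t in range(T):
    if t > 0:                              # propagate through the dynamics
        b = rng.poisson(beta * particles * (N - particles) / N)
        d = rng.poisson(gamma * particles)
        particles = np.clip(particles + b - d, 1, N)
    logw = obs[t] * np.log(particles) - particles   # Poisson log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=n_part, p=w)  # resample
    est.append(particles.mean())

print(np.corrcoef(est, true_I)[0, 1])      # filtered mean tracks the truth
```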


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Ahmed Z. Afify ◽  
Hassan M. Aljohani ◽  
Abdulaziz S. Alghamdi ◽  
Ahmed M. Gemeay ◽  
Abdullah M. Sarg

This article introduces a two-parameter flexible extension of the Burr-Hatke distribution using the inverse-power transformation. The failure rate of the new distribution can have an increasing, decreasing, or upside-down bathtub shape. Some of its mathematical properties are derived. Ten estimation methods, including classical and Bayesian techniques, are discussed for estimating the model parameters. The Bayes estimators for the unknown parameters, based on the squared error, general entropy, and linear exponential loss functions, are provided. The ranking and behavior of these methods are assessed by simulation results with their partial and overall ranks. Finally, the flexibility of the proposed distribution is illustrated empirically using two real-life datasets. The analyzed data show that the introduced distribution provides a better fit than some important competing distributions such as the Weibull, Fréchet, gamma, exponential, inverse log-logistic, inverse weighted Lindley, inverse Pareto, inverse Nakagami-M, and Burr-Hatke distributions.
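For context, the three loss functions mentioned each induce a different point estimate from the same posterior sample. A small numerical sketch with a mock Gamma posterior (the posterior and the loss shape parameters a and q are illustrative, not the paper's models):

```python
import numpy as np

# How point estimates under different losses are read off one posterior
# sample. The Gamma "posterior" and loss shapes a, q are illustrative.
#   squared error            -> posterior mean
#   LINEX, shape a           -> -(1/a) * log E[exp(-a * theta)]
#   general entropy, shape q -> (E[theta**(-q)])**(-1/q)

rng = np.random.default_rng(3)
posterior = rng.gamma(shape=20.0, scale=0.1, size=100_000)

se_est = posterior.mean()
a = 0.5
linex_est = -np.log(np.mean(np.exp(-a * posterior))) / a
q = 1.0
ge_est = np.mean(posterior ** (-q)) ** (-1.0 / q)

# Asymmetric losses pull the estimate below the posterior mean here.
print(se_est, linex_est, ge_est)
```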


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249001
Author(s):  
Ahtasham Gul ◽  
Muhammad Mohsin ◽  
Muhammad Adil ◽  
Mansoor Ali

Truncated models are essential for efficiently analyzing the finite data observed in almost all real-life situations. In this paper, a new truncated distribution with four parameters, named the Weibull-Truncated Exponential Distribution (W-TEXPD), is developed. The proposed model can be used as an alternative to the exponential, standard Weibull, shifted Gamma-Weibull, and three-parameter Weibull distributions. The statistical characteristics of the proposed model, including the cumulative distribution function, hazard function, cumulative hazard function, central moments, skewness, kurtosis, percentiles, and entropy, are derived. The maximum likelihood estimation method is employed to estimate the unknown parameters of the W-TEXPD. A simulation study is also carried out to assess the performance of the parameter estimates. The proposed probability distribution is fitted to five data sets from different fields to demonstrate its wide applicability. A comparison of the proposed model with some existing models is given to justify the performance of the W-TEXPD.
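The maximum likelihood step generalizes directly: write the negative log-likelihood and minimize it numerically. As a stand-in (the W-TEXPD density itself is not reproduced here), the sketch below fits a plain two-parameter Weibull; the W-TEXPD likelihood would slot into `nll` the same way.

```python
import numpy as np
from scipy.optimize import minimize

# Generic numerical MLE sketch with a two-parameter Weibull standing in;
# a four-parameter W-TEXPD log-likelihood would replace the body of nll().

rng = np.random.default_rng(4)
data = 3.0 * rng.weibull(1.8, size=2000)    # shape k=1.8, scale lam=3.0

def nll(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return np.inf                       # keep the search in-bounds
    z = data / lam
    return -np.sum(np.log(k / lam) + (k - 1.0) * np.log(z) - z ** k)

res = minimize(nll, x0=(1.0, 1.0), method="Nelder-Mead")
print(res.x)                                # approx (1.8, 3.0)
```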


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Huda M. Alshanbari ◽  
Muhammad Ijaz ◽  
Syed Muhammad Asim ◽  
Abd Al-Aziz Hosni El-Bagoury ◽  
Javid Gani Dar

The rationale of the paper is to present a new probability distribution that can model both monotonic and nonmonotonic hazard rate shapes, increasing flexibility relative to other probability distributions available in the literature. The proposed probability distribution is called the New Weighted Lomax (NWL) distribution. Various statistical properties are studied, including the estimation of the unknown parameters. To achieve the basic objectives, applications of the NWL distribution are presented by means of two real-life data sets as well as simulated data. It is verified that the NWL distribution performs better for both monotonic and nonmonotonic hazard rate functions than the Lomax (L), Power Lomax (PL), Exponential Lomax (EL), and Weibull Lomax (WL) distributions.
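For reference, the baseline Lomax hazard h(x) = alpha/(lam + x) is always decreasing, which is the restriction that weighted and extended variants such as the NWL distribution aim to lift. A quick numerical check with illustrative parameter values:

```python
import numpy as np

# The baseline Lomax hazard h(x) = alpha / (lam + x) is monotonically
# decreasing for any alpha, lam > 0 -- the limitation that weighted and
# extended Lomax families aim to relax. Parameter values are illustrative.

alpha, lam = 2.0, 1.0
x = np.linspace(0.0, 10.0, 101)
hazard = alpha / (lam + x)          # pdf/survival ratio in closed form

assert np.all(np.diff(hazard) < 0)  # strictly decreasing everywhere
print(hazard[0], hazard[-1])
```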


Author(s):  
George N. Wong ◽  
Zachary J. Weiner ◽  
Alexei V. Tkachenko ◽  
Ahmed Elbanna ◽  
Sergei Maslov ◽  
...  

We present modeling of the COVID-19 epidemic in Illinois, USA, capturing the implementation of a Stay-at-Home order and scenarios for its eventual release. We use a non-Markovian age-of-infection model that is capable of handling long and variable time delays without changing its model topology. Bayesian estimation of model parameters is carried out using Markov Chain Monte Carlo (MCMC) methods. This framework allows us to treat all available input information, including both the previously published parameters of the epidemic and available local data, in a uniform manner. To accurately model deaths as well as demand on the healthcare system, we calibrate our predictions to total and in-hospital deaths as well as hospital and ICU bed occupancy by COVID-19 patients. We apply this model not only to the state as a whole but also to its sub-regions in order to account for the wide disparities in population size and density. Without prior information on non-pharmaceutical interventions (NPIs), the model independently reproduces a mitigation trend closely matching mobility data reported by Google and Unacast. Forward predictions of the model provide robust estimates of the peak position and severity, and also enable forecasting the region-dependent results of releasing Stay-at-Home orders. The resulting highly constrained narrative of the epidemic is able to provide estimates of its unseen progression and inform scenarios for sustainable monitoring and control of the epidemic.
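The defining feature of an age-of-infection model is that new infections depend on the full recent incidence history through an infectiousness kernel, rather than on a single compartment state. A toy renewal-equation update follows; the kernel, R value, and population size are illustrative assumptions, not the Illinois model's calibrated inputs.

```python
import numpy as np

# Toy age-of-infection (renewal) update: today's infections are a
# convolution of past incidence with an infectiousness kernel g(s),
#   i_t = R * S_t * sum_s g(s) * i_{t-s},
# so arbitrary delay distributions need no extra compartments.
# Kernel, R, and population size are illustrative assumptions.

kernel = np.array([0.10, 0.25, 0.30, 0.20, 0.10, 0.05])  # g(1..6), sums to 1
R, N = 1.5, 1e6
S = 0.999                                   # susceptible fraction
incidence = [10.0]
for t in range(1, 40):
    recent = incidence[::-1][:len(kernel)]  # i_{t-1}, i_{t-2}, ...
    force = sum(g * i for g, i in zip(kernel, recent))
    new = R * S * force
    S -= new / N
    incidence.append(new)

print(incidence[-1], S)   # incidence grows while R * S > 1
```

Changing the kernel's shape changes the delay structure without touching the model topology, which is the property the abstract highlights.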


2018 ◽  
Author(s):  
Josephine Ann Urquhart ◽  
Akira O'Connor

Receiver operating characteristics (ROCs) are plots which provide a visual summary of a classifier's decision response accuracy at varying discrimination thresholds. Typical practice, particularly within psychological studies, involves plotting an ROC from a limited number of discrete thresholds before fitting signal detection parameters to the plot. We propose that additional insight into decision-making could be gained by increasing ROC resolution, using trial-by-trial measurements derived from a continuous variable in place of discrete discrimination thresholds. Such continuous ROCs are not yet routinely used in behavioural research, which we attribute to issues of practicality (i.e. the difficulty of applying standard ROC model-fitting methodologies to continuous data). Consequently, the purpose of the current article is to provide a documented method of fitting signal detection parameters to continuous ROCs. This method reliably produces model fits equivalent to the unequal variance least squares method of model-fitting (Yonelinas et al., 1998), irrespective of the number of data points used in ROC construction. We present the suggested method in three main stages: I) building continuous ROCs, II) model-fitting to continuous ROCs and III) extracting model parameters from continuous ROCs. Throughout the article, procedures are demonstrated in Microsoft Excel, using an example continuous variable, reaction time, taken from a single-item recognition memory task. Supplementary MATLAB code used for automating our procedures is also presented in Appendix B, with a validation of the procedure using simulated data shown in Appendix C.
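The first stage, building a continuous ROC, is straightforward to sketch: every observed value of the trial-by-trial variable acts as its own threshold. The simulated "old"/"new" scores below follow an unequal-variance signal detection setup; the distributions and sample sizes are illustrative, not taken from the article.

```python
import numpy as np

# Continuous ROC sketch: every observed score is its own threshold,
# instead of a handful of discrete confidence bins. Simulated
# unequal-variance signal detection data; all values illustrative.

rng = np.random.default_rng(5)
old = rng.normal(1.0, 1.25, 400)   # targets (wider "signal" variance)
new = rng.normal(0.0, 1.00, 400)   # lures

thresholds = np.sort(np.concatenate([old, new]))[::-1]
hit_rate = np.array([(old >= c).mean() for c in thresholds])
fa_rate = np.array([(new >= c).mean() for c in thresholds])
# (fa_rate, hit_rate) now trace an 800-point ROC curve.

auc = (old[:, None] > new[None, :]).mean()   # P(target score > lure score)
print(round(auc, 3))
```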


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 726
Author(s):  
Lamya A. Baharith ◽  
Wedad H. Aljuhani

This article presents a new method for generating distributions. This method combines two techniques, the transformed-transformer and alpha power transformation approaches, allowing for tremendous flexibility in the resulting distributions. The new approach is applied to introduce the alpha power Weibull-exponential distribution. The density of this distribution can take asymmetric and near-symmetric shapes. Various shapes, such as decreasing, increasing, L-shaped, near-symmetrical, and right-skewed shapes, are observed for the related failure rate function, making it more tractable for many modeling applications. Some significant mathematical features of the suggested distribution are determined. Estimates of the unknown parameters of the proposed distribution are obtained using the maximum likelihood method. Furthermore, some numerical studies were carried out in order to evaluate the estimation performance. Three practical datasets are considered to analyze the usefulness and flexibility of the introduced distribution. The proposed alpha power Weibull-exponential distribution can outperform other well-known distributions, showing its great adaptability in the context of real data analysis.
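For concreteness, the alpha power transformation applied to a baseline CDF F takes the form G(x) = (alpha^F(x) - 1)/(alpha - 1) for alpha > 0, alpha != 1. A quick numerical sanity check with a standard exponential baseline (the article's Weibull-exponential baseline would plug in the same way):

```python
import numpy as np

# Alpha power transform of a baseline CDF F:
#   G(x) = (alpha**F(x) - 1) / (alpha - 1),  alpha > 0, alpha != 1.
# Checked here on a standard exponential baseline; the article's
# Weibull-exponential baseline would plug in identically.

def alpha_power_cdf(x, alpha, base_cdf):
    F = base_cdf(x)
    return (alpha ** F - 1.0) / (alpha - 1.0)

x = np.linspace(0.0, 8.0, 200)
G = alpha_power_cdf(x, alpha=3.0, base_cdf=lambda t: 1.0 - np.exp(-t))

# Valid CDF: starts at 0, increases monotonically, tends to 1.
print(G[0], G[-1], bool(np.all(np.diff(G) > 0)))
```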


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 384
Author(s):  
Rocío Hernández-Sanjaime ◽  
Martín González ◽  
Antonio Peñalver ◽  
Jose J. López-Espín

The presence of unaccounted heterogeneity in simultaneous equation models (SEMs) is frequently problematic in many real-life applications. Under the usual assumption of homogeneity, the model can be seriously misspecified, and it can potentially induce an important bias in the parameter estimates. This paper focuses on SEMs in which data are heterogeneous and tend to form clustering structures in the endogenous-variable dataset. Because the identification of different clusters is not straightforward, a two-step strategy that first forms groups among the endogenous observations and then uses the standard simultaneous equation scheme is provided. Methodologically, the proposed approach is based on a variational Bayes learning algorithm and does not need to be executed for varying numbers of groups in order to identify the one that adequately fits the data. We describe the statistical theory, evaluate the performance of the suggested algorithm by using simulated data, and apply the two-step method to a macroeconomic problem.
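A toy version of the two-step idea can be sketched with plain k-means and per-group OLS standing in for the variational Bayes clustering and the simultaneous-equation estimation, so this is only a structural sketch on simulated data:

```python
import numpy as np

# Toy two-step sketch: (1) cluster heterogeneous observations on the
# endogenous variable, (2) fit a separate linear model per cluster.
# K-means and OLS stand in for the paper's variational Bayes step and
# the simultaneous-equation estimation; data are simulated.

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 1.0, 300)
group = rng.uniform(size=300) < 0.5
y = np.where(group, 2.0 + 3.0 * x, -2.0 - 3.0 * x) + rng.normal(0.0, 0.1, 300)

# Step 1: 1-D k-means on the endogenous variable y.
centers = np.array([y.min(), y.max()])
for _ in range(20):
    lab = np.abs(y[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([y[lab == k].mean() for k in (0, 1)])

# Step 2: OLS (intercept + slope) within each recovered cluster.
betas = []
for k in (0, 1):
    m = lab == k
    X = np.column_stack([np.ones(m.sum()), x[m]])
    betas.append(np.linalg.lstsq(X, y[m], rcond=None)[0])
print(betas)   # one (intercept, slope) pair per cluster
```

Fitting a single pooled model to these data would badly bias both the intercept and the slope, which is the misspecification risk the paper addresses.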

