Data-driven prediction and origin identification of epidemics in population networks

2021 ◽  
Vol 8 (1) ◽  
pp. 200531
Author(s):  
Karen Larson ◽  
Georgios Arampatzis ◽  
Clark Bowman ◽  
Zhizhong Chen ◽  
Panagiotis Hadjidoukas ◽  
...  

Effective intervention strategies for epidemics rely on the identification of their origin and on the robustness of the predictions made by network disease models. We introduce a Bayesian uncertainty quantification framework to infer model parameters for a disease spreading on a network of communities from limited, noisy observations; the state-of-the-art computational framework compensates for the model complexity by exploiting massively parallel computing architectures. Using noisy, synthetic data, we show the potential of the approach to perform robust model fitting and additionally demonstrate that we can effectively identify the disease origin via Bayesian model selection. As disease-related data are increasingly available, the proposed framework has broad practical relevance for the prediction and management of epidemics.
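The Bayesian model selection step for origin identification can be sketched in a few lines: under a uniform prior over candidate origin communities, the posterior probability of each origin is its normalized marginal likelihood. The toy network delays, observations, and noise level below are invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical shortest-path "arrival delay" from each candidate origin
# community to each observed community (toy values).
arrival_delay = {
    "A": [0.0, 1.0, 2.0, 3.0],
    "B": [1.0, 0.0, 1.0, 2.0],
    "C": [2.0, 1.0, 0.0, 1.0],
}
observed = [1.1, 0.2, 0.9, 2.1]    # noisy observed arrival times
sigma = 0.5                        # assumed Gaussian observation noise

def log_likelihood(delays):
    return sum(-0.5 * ((o - d) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
               for o, d in zip(observed, delays))

# Bayesian model selection: posterior over origins under a uniform prior.
logp = {origin: log_likelihood(d) for origin, d in arrival_delay.items()}
m = max(logp.values())
weights = {o: math.exp(lp - m) for o, lp in logp.items()}
total = sum(weights.values())
posterior = {o: w / total for o, w in weights.items()}
best = max(posterior, key=posterior.get)
```

With these toy numbers the observations fit origin "B" best, so its posterior probability dominates.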

Author(s):  
Christopher J. Arthurs ◽  
Nan Xiao ◽  
Philippe Moireau ◽  
Tobias Schaeffter ◽  
C. Alberto Figueroa

Abstract. A major challenge in constructing three-dimensional patient-specific hemodynamic models is the calibration of model parameters to match patient data on flow, pressure, wall motion, etc., acquired in the clinic. Current workflows are manual and time-consuming. This work presents a flexible computational framework for model parameter estimation in cardiovascular flows that relies on the following fundamental contributions. (i) A Reduced-Order Unscented Kalman Filter (ROUKF) model for data assimilation of wall material and simple lumped parameter network (LPN) boundary condition model parameters. (ii) A constrained least squares augmentation (ROUKF-CLS) for more complex LPNs. (iii) A “Netlist” implementation, supporting easy filtering of parameters in such complex LPNs. The ROUKF algorithm is demonstrated using non-invasive patient-specific data on anatomy, flow and pressure from a healthy volunteer. The ROUKF-CLS algorithm is demonstrated using synthetic data on a coronary LPN. The methods described in this paper have been implemented as part of the CRIMSON hemodynamics software package.
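A minimal sketch of the kind of LPN boundary condition whose parameters such a filter would calibrate, assuming a two-element Windkessel (RC) model with invented parameter values; the actual CRIMSON Netlist implementation is far more general.

```python
R = 1.0      # distal resistance (illustrative value)
C = 1.5      # vessel compliance (illustrative value)
dt = 0.01    # time step, s

def windkessel_step(P, Q):
    """One explicit-Euler step of the RC Windkessel ODE dP/dt = Q/C - P/(R*C)."""
    return P + dt * (Q / C - P / (R * C))

P, Q = 0.0, 80.0          # initial pressure, constant inflow
for _ in range(5000):     # 50 s of simulated time
    P = windkessel_step(P, Q)
# pressure relaxes toward the steady state P = R * Q
```

A parameter-estimation scheme would repeatedly run such a model forward and adjust R and C until simulated pressure matches the measured trace.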


Author(s):  
Choujun Zhan ◽  
Chi K. Tse ◽  
Yuxia Fu ◽  
Zhikang Lai ◽  
Haijun Zhang

Abstract. This study integrates daily intercity migration data with the classic Susceptible-Exposed-Infected-Removed (SEIR) model to construct a new model suitable for describing the dynamics of epidemic spreading of Coronavirus Disease 2019 (COVID-19) in China. Daily intercity migration data for 367 cities in China are collected from Baidu Migration, a mobile-app-based human migration tracking data system. Historical data of infected, recovered and death cases from official sources are used for model fitting. The set of model parameters obtained from the best data fit using a constrained nonlinear optimization procedure is used to estimate the dynamics of epidemic spreading in the coming weeks. Our results show that the number of infections in most cities in China will peak between mid-February and early March 2020, with about 0.8%, less than 0.1% and less than 0.01% of the population eventually infected in Wuhan, Hubei Province and the rest of China, respectively.
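The SEIR backbone of such a model can be sketched as a single-city simulation; the parameters below are illustrative, not the fitted values from the study, and the paper additionally couples many such city models through daily migration flows.

```python
N = 1_000_000                              # city population (toy value)
beta, sigma, gamma = 0.6, 1 / 5.2, 1 / 10  # assumed transmission, incubation, removal rates
S, E, I, R_ = N - 100.0, 0.0, 100.0, 0.0   # seed with 100 infectious cases
dt = 0.1                                   # time step, days

for _ in range(int(120 / dt)):             # simulate 120 days
    new_inf = beta * S * I / N             # new exposures per day
    dS = -new_inf
    dE = new_inf - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    S, E, I, R_ = S + dt * dS, E + dt * dE, I + dt * dI, R_ + dt * dR

attack_rate = R_ / N                       # fraction eventually infected
```

Model fitting then amounts to choosing beta, sigma, gamma (and the migration coupling) so the simulated infected/recovered curves match the historical case data.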


2018 ◽  
Author(s):  
Josephine Ann Urquhart ◽  
Akira O'Connor

Receiver operating characteristics (ROCs) are plots which provide a visual summary of a classifier’s decision response accuracy at varying discrimination thresholds. Typical practice, particularly within psychological studies, involves plotting an ROC from a limited number of discrete thresholds before fitting signal detection parameters to the plot. We propose that additional insight into decision-making could be gained through increasing ROC resolution, using trial-by-trial measurements derived from a continuous variable, in place of discrete discrimination thresholds. Such continuous ROCs are not yet routinely used in behavioural research, which we attribute to issues of practicality (i.e. the difficulty of applying standard ROC model-fitting methodologies to continuous data). Consequently, the purpose of the current article is to provide a documented method of fitting signal detection parameters to continuous ROCs. This method reliably produces model fits equivalent to the unequal variance least squares method of model-fitting (Yonelinas et al., 1998), irrespective of the number of data points used in ROC construction. We present the suggested method in three main stages: I) building continuous ROCs, II) model-fitting to continuous ROCs and III) extracting model parameters from continuous ROCs. Throughout the article, procedures are demonstrated in Microsoft Excel, using an example continuous variable: reaction time, taken from a single-item recognition memory task. Supplementary MATLAB code used for automating our procedures is also presented in Appendix B, with a validation of the procedure using simulated data shown in Appendix C.
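Stage I, building a continuous ROC, can be sketched by letting each trial's value of the continuous variable act as its own discrimination threshold. The reaction times below are invented, and the convention that faster responses signal "old" is an assumption made for the example.

```python
# Made-up reaction times (ms) for studied ("old") and unstudied ("new") items.
old_rts = [420, 450, 480, 500, 530, 560]
new_rts = [470, 510, 540, 580, 600, 640]

# Sweep a criterion across every observed value: one ROC point per
# distinct trial value instead of a handful of discrete bins.
thresholds = sorted(set(old_rts + new_rts))
roc = []
for t in thresholds:
    hit = sum(rt <= t for rt in old_rts) / len(old_rts)   # hit rate
    fa = sum(rt <= t for rt in new_rts) / len(new_rts)    # false-alarm rate
    roc.append((fa, hit))

# Trapezoidal area under the continuous ROC as a summary statistic.
pts = [(0.0, 0.0)] + roc + [(1.0, 1.0)]
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

With no ties across the two lists, this trapezoidal area equals the probability that a randomly chosen old item is faster than a randomly chosen new item (29/36 here).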


2021 ◽  
Author(s):  
Mikhail Sviridov ◽  
Anton Mosin ◽  
Sergey Lebedev ◽  
Ron Thompson ◽  
...  

During proactive geosteering, special inversion algorithms are used to process the readings of logging-while-drilling resistivity tools in real-time and provide oil field operators with formation models to make informed steering decisions. Currently, there is no industry standard for inversion deliverables and corresponding quality indicators because major tool vendors develop their own device-specific algorithms and use them internally. This paper presents the first implementation of a vendor-neutral inversion approach applicable to any induction resistivity tool, enabling operators to standardize the efficiency of various geosteering services. The necessity of such a universal inversion approach was inspired by the activity of the LWD Deep Azimuthal Resistivity Services Standardization Workgroup initiated by the SPWLA Resistivity Special Interest Group in 2016. The proposed inversion algorithm uses a 1D layer-cake formation model and is performed interval-by-interval. The following model parameters can be determined: horizontal and vertical resistivities of each layer, positions of layer boundaries, and formation dip. The inversion can support any deep azimuthal induction resistivity tool with coaxial, tilted, or orthogonal transmitting and receiving antennas. The inversion is purely data-driven; it works in automatic mode and provides fully unbiased results obtained from tool readings only. The algorithm is based on the statistical reversible-jump Markov chain Monte Carlo method, which does not require any predefined assumptions about the formation structure and enables searching for models explaining the data even if the number of layers in the model is unknown. To globalize the search, the algorithm runs several Markov chains capable of exchanging their states between one another to move from the vicinity of a local minimum to a more promising domain of model parameter space.
During execution, the inversion retains all models it evaluates in order to estimate the resolution accuracy of formation parameters and generate several quality indicators. Eventually, these indicators are delivered together with recovered resistivity models to help operators evaluate the reliability of inversion results. To ensure high performance of the inversion, a fast and accurate semi-analytical forward solver is employed to compute the required responses of a tool with specific geometry and their derivatives with respect to any parameter of the multi-layered model. Moreover, the reliance on the simultaneous evolution of multiple Markov chains makes the algorithm suitable for parallel execution, which significantly decreases the computational time. Application of the proposed inversion is shown on a series of synthetic examples and field case studies such as navigating the well along the reservoir roof or near the oil-water contact in oil sands. Inversion results for all scenarios confirm that the proposed algorithm can successfully evaluate formation model complexity, recover model parameters, and quantify their uncertainty within a reasonable computational time. The presented vendor-neutral stochastic approach to data processing leads to the standardization of the inversion output, including the resistivity model and its quality indicators, which helps operators better understand the capabilities of tools from different vendors and eventually make more confident geosteering decisions.
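The chain-exchange mechanism can be sketched with a toy one-dimensional misfit function. This is plain parallel tempering on an invented objective, not the reversible-jump resistivity inversion itself: chains at higher "temperatures" explore widely and periodically swap states with colder chains, which helps the cold chain escape local minima.

```python
import math
import random

random.seed(0)

def misfit(x):
    return (x * x - 1.0) ** 2           # toy objective with two minima at x = ±1

temps = [1.0, 2.0, 4.0]                 # one chain per temperature
states = [3.0] * len(temps)             # deliberately poor starting point
best = states[0]

for _ in range(3000):
    # Within-chain Metropolis update for each temperature.
    for i, T in enumerate(temps):
        prop = states[i] + random.gauss(0.0, 0.5)
        if random.random() < math.exp(min(0.0, (misfit(states[i]) - misfit(prop)) / T)):
            states[i] = prop
        if misfit(states[i]) < misfit(best):
            best = states[i]
    # Propose exchanging the states of two neighbouring chains.
    i = random.randrange(len(temps) - 1)
    a = (misfit(states[i]) - misfit(states[i + 1])) * (1.0 / temps[i] - 1.0 / temps[i + 1])
    if random.random() < math.exp(min(0.0, a)):
        states[i], states[i + 1] = states[i + 1], states[i]
```

The swap acceptance rule preserves each chain's target distribution, so the parallel run is statistically equivalent to independent chains while mixing much faster.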


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Lev V. Utkin

This paper studies a fuzzy classification model based on a contaminated (robust) model that produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the model are derived by minimizing the fuzzy expected risk. It is shown that the algorithm for computing the classification parameters reduces to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.
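The reduction to weighted support vector machine tasks can be sketched as a linear SVM trained by subgradient descent on a weighted hinge loss. The toy data and per-point weights below are invented; in the paper the weights come from minimizing the fuzzy expected risk.

```python
# Toy 2-D data: +1 cluster in the positive quadrant, -1 cluster opposite.
data = [((1.0, 1.2), 1), ((1.5, 0.8), 1), ((2.0, 1.5), 1),
        ((-1.0, -0.5), -1), ((-1.2, -1.0), -1), ((-0.8, -1.5), -1)]
weights = [1.0, 0.5, 1.0, 1.0, 0.5, 1.0]   # illustrative per-point weights

w, b = [0.0, 0.0], 0.0
lam, lr = 0.01, 0.1                        # regularization strength, step size
for epoch in range(200):
    for ((x1, x2), y), c in zip(data, weights):
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        g1, g2, gb = lam * w[0], lam * w[1], 0.0
        if margin < 1:                     # subgradient of the weighted hinge loss
            g1 -= c * y * x1
            g2 -= c * y * x2
            gb -= c * y
        w[0] -= lr * g1
        w[1] -= lr * g2
        b -= lr * gb

predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for (x1, x2), _ in data]
```

Down-weighted points contribute smaller hinge penalties, so the separating hyperplane is influenced less by them, which is exactly the role the weights play in the reduced subproblems.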


Author(s):  
Siba Monther Yousif ◽  
Roslina M. Sidek ◽  
Anwer Sabah Mekki ◽  
Nasri Sulaiman ◽  
Pooria Varahram

In this paper, a low-complexity model is proposed for linearizing power amplifiers with memory effects using the digital predistortion (DPD) technique. In the proposed model, the linear, low-order nonlinear and high-order nonlinear memory effects are computed separately to provide flexibility in controlling the model parameters so that both high performance and low model complexity can be achieved. The performance of the proposed model is assessed based on experimental measurements of a commercial class AB power amplifier by applying a single-carrier wideband code division multiple access (WCDMA) signal. The linearity performance and the model complexity of the proposed model are compared with the memory polynomial (MP) model and the DPD with single-feedback model. The experimental results show that the proposed model outperforms the latter model by 5 dB in terms of adjacent channel leakage power ratio (ACLR) with comparable complexity. Compared to the MP model, the proposed model shows improved ACLR performance by 10.8 dB with a reduction in complexity by 17% in terms of the number of floating-point operations (FLOPs) and 18% in terms of the number of model coefficients.
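The baseline MP model that the proposed predistorter is compared against can be sketched directly from its defining sum over odd-order polynomial terms of delayed input samples; the coefficient values here are arbitrary placeholders, not fitted ones.

```python
K, M = 5, 2                                   # nonlinearity order, memory depth
coeffs = {(k, m): 0.1 / (k + m + 1)           # placeholder a_{k,m} coefficients
          for k in range(1, K + 1, 2) for m in range(M + 1)}

def memory_polynomial(x):
    """y[n] = sum over odd k and delays m of a_{k,m} * x[n-m] * |x[n-m]|**(k-1)."""
    y = []
    for n in range(len(x)):
        acc = 0j
        for (k, m), a in coeffs.items():
            if n - m >= 0:                    # ignore samples before the signal starts
                s = x[n - m]
                acc += a * s * abs(s) ** (k - 1)
        y.append(acc)
    return y

x = [0.5 + 0.1j, -0.3 + 0.4j, 0.8 - 0.2j, 0.1 + 0.0j]   # toy baseband samples
y = memory_polynomial(x)
```

Counting the multiplications in this loop is how FLOP-based complexity comparisons like the 17% reduction quoted above are made.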


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
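The fuzzification step can be sketched for binary hazard maps: a cell counts as matched if a similarly valued cell lies within a length-scale neighbourhood, so spatially close errors are penalized less than in a strict pixel-pair comparison. The maps and length scale below are toy values, not the Te Maari benchmark data.

```python
# Toy binary inundation maps (1 = flow present), 3 rows x 4 columns.
simulated = [[0, 1, 1, 0],
             [0, 1, 1, 0],
             [0, 0, 1, 0]]
observed  = [[0, 0, 1, 1],
             [0, 1, 1, 0],
             [0, 0, 1, 0]]
L = 1   # length-scale parameter, in cells

def fuzzy_hit(map_a, map_b, r, c, scale):
    """True if any cell of map_b within `scale` cells of (r, c) equals map_a[r][c]."""
    v = map_a[r][c]
    for dr in range(-scale, scale + 1):
        for dc in range(-scale, scale + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(map_b) and 0 <= cc < len(map_b[0]):
                if map_b[rr][cc] == v:
                    return True
    return False

cells = [(r, c) for r in range(3) for c in range(4)]
crisp = sum(simulated[r][c] == observed[r][c] for r, c in cells) / len(cells)
fuzzy = sum(fuzzy_hit(simulated, observed, r, c, L) for r, c in cells) / len(cells)
```

Increasing L always raises (or preserves) the fuzzy accuracy, which is the trade-off against resolution described above.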


2019 ◽  
Vol 7 (1) ◽  
pp. 13-27
Author(s):  
Safaa K. Kadhem ◽  
Sadeq A. Kadhim

This paper aims at modeling the crash counts in Al-Muthanna governorate using a finite mixture model. We use one of the most common MCMC methods, the Gibbs sampler, to implement Bayesian inference for estimating the model parameters. We perform a simulation study, based on synthetic data, to check the ability of the sampler to find the best estimates of the model. We use two well-known criteria, AIC and BIC, to determine the best model fitted to the data. Finally, we apply our sampler to model the crash counts in Al-Muthanna governorate.
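A toy version of such a Gibbs sampler for a two-component Poisson mixture of counts; the synthetic data, Gamma(1, 1) priors, and fixed equal mixing weights are all simplifying assumptions made here for brevity, not choices from the paper.

```python
import math
import random

random.seed(42)

counts = [1, 2, 0, 1, 3, 9, 11, 8, 10, 12]   # synthetic counts, two obvious groups
lam = [1.0, 8.0]                             # initial component rates

def pois_logpmf(k, rate):
    return k * math.log(rate) - rate - math.lgamma(k + 1)

for _ in range(500):
    # Step 1: sample each observation's component given the current rates.
    z = []
    for k in counts:
        d = pois_logpmf(k, lam[0]) - pois_logpmf(k, lam[1])
        p1 = 1.0 / (1.0 + math.exp(d))       # P(component 1 | k, lam)
        z.append(1 if random.random() < p1 else 0)
    # Step 2: sample each rate given its assigned counts.  With a Gamma(1, 1)
    # prior the posterior shape is an integer, so the Gamma draw can be made
    # as a scaled sum of unit exponentials.
    for j in (0, 1):
        ks = [k for k, zj in zip(counts, z) if zj == j]
        shape, rate = 1 + sum(ks), 1 + len(ks)
        lam[j] = sum(-math.log(1.0 - random.random()) for _ in range(shape)) / rate
```

After burn-in, the sampled rates concentrate around the means of the low-count and high-count groups, and the retained draws of `lam` and `z` form the posterior sample used for inference.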


2021 ◽  
Author(s):  
Lilian Schuster ◽  
David Rounce ◽  
Fabien Maussion

A recent large model intercomparison study (GlacierMIP) showed that differences between the glacier models are a dominant source of uncertainty for future glacier change projections, in particular in the first half of the century. Each glacier model has its own unique set of process representations and climate forcing methodology, which makes it impossible to determine the model components that contribute most to the projection uncertainty. This study aims to improve our understanding of the sources of large-scale glacier model uncertainty using the Open Global Glacier Model (OGGM), focussing on the surface mass balance (SMB) in a first step. We calibrate and run a set of interchangeable SMB model parameterizations (e.g. monthly vs. daily, constant vs. variable lapse rates, albedo, snowpack evolution and refreezing) under controlled boundary conditions. Based on ensemble approaches, we explore the influence of (i) the parameter calibration strategy and (ii) SMB model complexity on regional to global glacier change. These uncertainties are then put in relation to a qualitative selection of other model design choices, such as the forcing climate dataset and ice dynamics model parameters.
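One of the simpler interchangeable SMB parameterizations, a monthly temperature-index model, can be sketched as follows; the climate series, melt factor, and temperature thresholds are invented values, not OGGM defaults.

```python
melt_factor = 5.0      # mm w.e. per (deg C * month), assumed value
t_melt = 0.0           # melt threshold, deg C (assumed)
t_solid = 2.0          # precipitation falls as snow at or below this, deg C (assumed)

monthly_temp = [-8, -6, -2, 1, 5, 9, 12, 11, 6, 1, -4, -7]       # deg C
monthly_prcp = [60, 55, 50, 45, 40, 35, 30, 35, 45, 55, 60, 65]  # mm

# Accumulation: solid precipitation; ablation: temperature-index melt.
accumulation = sum(p for t, p in zip(monthly_temp, monthly_prcp) if t <= t_solid)
ablation = sum(melt_factor * max(t - t_melt, 0.0) for t in monthly_temp)
smb = accumulation - ablation   # annual surface mass balance, mm w.e.
```

Swapping this block for a daily version, adding refreezing, or varying the lapse rate changes the calibrated parameters, which is exactly the complexity axis the ensemble experiments explore.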

