Predicting the second wave of COVID-19 in Washtenaw County, MI

Author(s):  
Marissa Renardy ◽  
Denise Kirschner

Abstract The COVID-19 pandemic has highlighted the patchwork nature of disease epidemics, with infection spread dynamics varying wildly across countries and across states within the US. These heterogeneous patterns are also observed within individual states, with patches of concentrated outbreaks. Data are being generated daily at all of these spatial scales, and answers to questions regarding reopening strategies are desperately needed. Mathematical modeling is useful in exactly these cases, and modeling at a county scale may be valuable for further predicting disease dynamics for the purposes of public health interventions. To explore this issue, we study and predict the spread of COVID-19 in Washtenaw County, MI, home to the University of Michigan, Eastern Michigan University, and Google, as well as a sister city to Detroit, MI, where there has been a serious outbreak. Here, we apply a discrete and stochastic network-based modeling framework that allows us to track every individual in the county. In this framework, we construct contact networks based on synthetic population datasets specific to Washtenaw County that are derived from US Census datasets. We assign individuals to households, workplaces, schools, and group quarters (such as prisons). In addition, we assign casual contacts to each individual at random. Using this framework, we explicitly simulate Michigan-specific government-mandated workplace and school closures as well as social distancing measures. We also perform sensitivity analyses to identify key model parameters and mechanisms contributing to the observed disease burden in the three months following the first observed cases of COVID-19 in Michigan. We then consider several scenarios for relaxing restrictions and reopening workplaces to predict what actions would be most prudent. In particular, we consider the effects of 1) different timings for reopening, and 2) different levels of workplace vs. casual contact re-engagement. Through simulations and sensitivity analyses, we explore the mechanisms driving the magnitude and timing of a second wave of infections upon reopening. This model can be adapted to other US counties using synthetic population databases and data specific to those regions.
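The abstract does not reproduce the model equations, so the following minimal sketch only illustrates the kind of discrete, stochastic, network-based simulation described: an SEIR-type process run over a synthetic contact network. The network, rates, and seeding below are placeholders, not the authors' census-derived contact structure or calibrated parameters.

```python
import random
import networkx as nx

# Placeholder parameters and network -- illustrative only, not the authors'
# calibrated Washtenaw County values or their census-based contact networks.
N, BETA, SIGMA, GAMMA, STEPS = 2000, 0.05, 1 / 5.2, 1 / 10, 120

G = nx.watts_strogatz_graph(N, k=10, p=0.1)    # stand-in contact network
state = {i: "S" for i in G}                    # S, E, I, or R per individual
for seed in random.sample(list(G), 5):
    state[seed] = "I"

for t in range(STEPS):
    nxt = dict(state)
    for i in G:
        if state[i] == "S":
            # Each infectious neighbor independently transmits with prob. BETA.
            n_inf = sum(state[j] == "I" for j in G.neighbors(i))
            if n_inf and random.random() < 1 - (1 - BETA) ** n_inf:
                nxt[i] = "E"
        elif state[i] == "E" and random.random() < SIGMA:
            nxt[i] = "I"
        elif state[i] == "I" and random.random() < GAMMA:
            nxt[i] = "R"
    state = nxt

print("ever infected:", sum(v != "S" for v in state.values()))
```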

The Condor ◽  
2018 ◽  
Vol 120 (4) ◽  
pp. 765-786 ◽  
Author(s):  
Péter Sólymos ◽  
Steven M. Matsuoka ◽  
Steven G. Cumming ◽  
Diana Stralberg ◽  
Patricia Fontaine ◽  
...  

Abstract We used conventional and finite mixture removal models with and without time-varying covariates to evaluate availability given presence for 152 bird species using data from point counts in boreal North America. We found that the choice of model had an impact on the estimability of unknown model parameters and affected the bias and variance of corrected counts. Finite mixture models provided better fit than conventional removal models and better adjusted for count duration. However, reliably estimating parameters and minimizing variance using mixture models required at least 200–1,000 detections. Mixture models with time-varying proportions of infrequent singers were best supported across species, indicating that accounting for date- and time-related heterogeneity is important when combining data across studies over large spatial scales, multiple sampling time frames, or variable survey protocols. Our flexible and continuous time-removal modeling framework can be used to account for such heterogeneity through the incorporation of easily obtainable covariates, such as methods, date, time, and location. Accounting for availability bias in bird surveys allows for better integration of disparate studies at large spatial scales and better adjustment of local, regional, and continental population size estimates.
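For orientation, availability given presence in such removal models is commonly written as $p(t_J) = 1 - e^{-\phi t_J}$ for the conventional model and $p_{\mathrm{mix}}(t_J) = 1 - c\, e^{-\phi t_J}$ for the finite-mixture variant, where $\phi$ is the cue (singing) rate, $t_J$ the count duration, and $c$ the proportion of infrequent singers; corrected counts are then obtained by dividing observed counts by the estimated $p$. This is a generic textbook form and may differ in detail from the time-varying parameterizations fitted in the paper.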


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Guillaume Ropp ◽  
Vincent Lesur ◽  
Julien Baerenzung ◽  
Matthias Holschneider

Abstract We describe a new, original approach to the modelling of the Earth’s magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. This method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. Sources that most significantly contribute to the field measured at the surface of the Earth are modelled. Their separation is based on strong prior information on their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classic approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller time and spatial scales. An important aspect of our method is to yield reliable error bars for all model parameters. These errors, however, are only reliable insofar as the descriptions of the different sources and the prior information used are realistic. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth edition of the International Geomagnetic Reference Field.
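As a schematic of the sequential (Kalman filter) step that the method builds on, the sketch below shows a generic linear predict/update cycle; the paper's actual state vector of field sources and its correlation-based priors are not reproduced here.

```python
import numpy as np

# Generic linear Kalman filter step -- a schematic of sequential assimilation
# only; the paper's state (spherical-harmonic Gauss coefficients per source)
# and its correlation-based priors are not reproduced here.
def kalman_step(x, P, F, Q, H, R, y):
    x_pred = F @ x                       # predict: propagate the state ...
    P_pred = F @ P @ F.T + Q             # ... and its covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new                  # P_new yields error bars on parameters

# Tiny two-parameter demo with made-up matrices and one observation.
x, P = np.zeros(2), np.eye(2)
F, Q, H, R = np.eye(2), 0.01 * np.eye(2), np.eye(2), 0.1 * np.eye(2)
x, P = kalman_step(x, P, F, Q, H, R, y=np.array([0.3, -0.1]))
print(x, np.sqrt(np.diag(P)))
```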


Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1610
Author(s):  
Katia Colaneri ◽  
Alessandra Cretarola ◽  
Benedetta Salterini

In this paper, we study the optimal investment and reinsurance problem of an insurance company whose investment preferences are described via a forward dynamic exponential utility in a regime-switching market model. Financial and actuarial frameworks are dependent since stock prices and insurance claims vary according to a common factor given by a continuous time finite state Markov chain. We construct the value function and we prove that it is a forward dynamic utility. Then, we characterize the optimal investment strategy and the optimal proportional level of reinsurance. We also perform numerical experiments and provide sensitivity analyses with respect to some model parameters.
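To make the market setting concrete, the sketch below simulates a stock price whose drift and volatility switch with a two-state continuous-time Markov chain, the regime-switching ingredient described in the abstract; all numerical values are placeholders, and the utility, claims, and reinsurance components are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state regime-switching stock dynamics: drift and volatility
# switch with a continuous-time Markov chain with generator Q. Values are
# placeholders, not the paper's calibration; claims/reinsurance are omitted.
mu, sigma = np.array([0.08, 0.02]), np.array([0.15, 0.35])
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
T, n = 1.0, 1000
dt = T / n

s, regime = 1.0, 0
for _ in range(n):
    # Regime switch with probability ~ rate * dt (Euler approximation).
    if rng.random() < -Q[regime, regime] * dt:
        regime = 1 - regime
    s *= np.exp((mu[regime] - 0.5 * sigma[regime] ** 2) * dt
                + sigma[regime] * np.sqrt(dt) * rng.standard_normal())

print("terminal stock price:", s)
```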


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Mohammed A. Aba Oud ◽  
Aatif Ali ◽  
Hussam Alrabaiah ◽  
Saif Ullah ◽  
Muhammad Altaf Khan ◽  
...  

Abstract COVID-19, or coronavirus, is a newly emerged infectious disease that started in Wuhan, China, in December 2019 and spread worldwide very quickly. Although the recovery rate is greater than the death rate, COVID-19 infection is very harmful to the human community and is causing financial losses to the economy. No proper vaccine for this infection has been introduced to the market to treat infected people. Various approaches have been implemented recently to study the dynamics of this novel infection. Mathematical models are one of the effective tools in this regard for understanding the transmission patterns of COVID-19. In the present paper, we formulate a fractional epidemic model in the Caputo sense with consideration of quarantine, isolation, and environmental impacts to examine the dynamics of the COVID-19 outbreak. Fractional models are quite useful for better understanding disease epidemics, as they capture memory and nonlocality effects. First, we construct the model in ordinary differential equations and then apply the Caputo operator to formulate its fractional derivative. We present some of the necessary mathematical analysis for the fractional model. Furthermore, the model is fitted to the reported cases in Pakistan, one of the epicenters of COVID-19 in Asia. The important threshold parameter of the model, known as the basic reproduction number, is evaluated theoretically and numerically; based on the real fitted parameters, we obtained $\mathcal{R}_{0} \approx 1.50$. Finally, an efficient numerical scheme of Adams–Moulton type is used to simulate the fractional model. The impact of some of the key model parameters on the disease dynamics and its elimination is shown graphically for various values of the noninteger order of the Caputo derivative. We conclude that the use of a fractional epidemic model provides a better understanding of, and more biological insight into, the disease dynamics.
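For reference, the Caputo fractional derivative of order $\alpha \in (0,1)$ that underlies such models is standardly defined as
$${}^{C}\!D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-\tau)^{-\alpha} f'(\tau)\, d\tau,$$
so the rate of change at time $t$ weights the entire past trajectory, which is the memory effect the abstract refers to; the paper's specific compartmental equations are not reproduced here.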


2012 ◽  
Vol 2012 ◽  
pp. 1-6 ◽  
Author(s):  
Eric Jutkowitz ◽  
Laura N. Gitlin ◽  
Laura T. Pizzi ◽  
Edward Lee ◽  
Marie P. Dennis

Evaluating the cost effectiveness of interventions for aging in place is essential for adoption in service settings. We present the cost effectiveness of Advancing Better Living for Elders (ABLE), previously shown in a randomized trial to reduce functional difficulties and mortality in 319 community-dwelling elders. ABLE involved occupational and physical therapy sessions and home modifications to address client-identified functional difficulties, performance goals, and home safety. The incremental cost-effectiveness ratio (ICER), expressed as the additional cost to bring about one additional year of life, was calculated. Two models were then developed to account for potential cost differences in implementing ABLE. Probabilistic sensitivity analyses were conducted to account for variations in model parameters. By two years, there were 30 deaths (9: ABLE; 21: control). The additional cost for one additional year of life was $13,179 for Model 1 and $14,800 for Model 2. Investment in ABLE may be worthwhile depending on society's willingness to pay.
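The ICER used here has a simple form: the difference in costs divided by the difference in life-years between intervention and control. A minimal sketch with placeholder numbers, not the trial's actual inputs:

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra life-year.
# The arguments below are illustrative placeholders, not the ABLE trial data.
def icer(cost_intervention, cost_control, ly_intervention, ly_control):
    return (cost_intervention - cost_control) / (ly_intervention - ly_control)

print(icer(cost_intervention=3_000.0, cost_control=500.0,
           ly_intervention=1.9, ly_control=1.7))   # dollars per additional life-year
```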


2021 ◽  
Vol 17 (9) ◽  
pp. e1009332
Author(s):  
Fredrik Allenmark ◽  
Ahu Gokce ◽  
Thomas Geyer ◽  
Artyom Zinchenko ◽  
Hermann J. Müller ◽  
...  

In visual search tasks, repeating features or the position of the target results in faster response times. Such inter-trial ‘priming’ effects occur not just for repetitions from the immediately preceding trial but also from trials further back. A paradigm known to produce particularly long-lasting inter-trial effects of the target-defining feature, target position, and response (feature) is the ‘priming of pop-out’ (PoP) paradigm, which typically uses sparse search displays and random swapping across trials of target- and distractor-defining features. However, the mechanisms underlying these inter-trial effects are still not well understood. To address this, we applied a modeling framework combining an evidence accumulation (EA) model with different computational updating rules of the model parameters (i.e., the drift rate and starting point of EA) for different aspects of stimulus history, to data from a (previously published) PoP study that had revealed significant inter-trial effects from several trials back for repetitions of the target color, the target position, and (response-critical) target feature. By performing a systematic model comparison, we aimed to determine which EA model parameter and which updating rule for that parameter best accounts for each inter-trial effect and the associated n-back temporal profile. We found that, in general, our modeling framework could accurately predict the n-back temporal profiles. Further, target color- and position-based inter-trial effects were best understood as arising from redistribution of a limited-capacity weight resource which determines the EA rate. In contrast, response-based inter-trial effects were best explained by a bias of the starting point towards the response associated with a previous target; this bias appeared largely tied to the position of the target. These findings elucidate how our cognitive system continually tracks, and updates an internal predictive model of, a number of separable stimulus and response parameters in order to optimize task performance.
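As a rough illustration of the evidence-accumulation component, the sketch below simulates a single diffusion-type trial in which stimulus history can be injected either into the drift rate or into the starting point, the two updating targets compared in the paper; all values are illustrative placeholders rather than fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic evidence-accumulation (diffusion) trial: history effects can be
# injected either into the drift rate (feature/position priming) or into the
# starting point (response priming). All values are illustrative placeholders.
def ea_trial(drift, start_bias=0.0, threshold=1.0, dt=0.001, noise=1.0):
    x, t = start_bias, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, int(x > 0)            # response time and which boundary was hit

print(ea_trial(drift=1.5))                   # unbiased starting point
print(ea_trial(drift=1.5, start_bias=0.3))   # biased start -> faster repeats
```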


2021 ◽  
Vol 15 (2) ◽  
pp. 615-632
Author(s):  
Nora Helbig ◽  
Yves Bühler ◽  
Lucie Eberhard ◽  
César Deschamps-Berger ◽  
Simon Gascoin ◽  
...  

Abstract. The spatial distribution of snow in the mountains is significantly influenced by interactions of topography with wind, precipitation, shortwave and longwave radiation, and avalanches that may relocate the accumulated snow. One of the most crucial model parameters for various applications such as weather forecasts, climate predictions and hydrological modeling is the fraction of the ground surface that is covered by snow, also called fractional snow-covered area (fSCA). While previous subgrid parameterizations for the spatial snow depth distribution and fSCA work well, their performance was scale-dependent. Here, we were able to confirm a previously established empirical peak-of-winter parameterization for the standard deviation of snow depth σHS by evaluating it with 11 spatial snow depth data sets from 7 different geographic regions and snow climates, with resolutions ranging from 0.1 to 3 m. Enhanced performance (mean percentage errors, MPE, decreased by 25 %) across all spatial scales ≥ 200 m was achieved by recalibrating and introducing a scale dependency in the dominant scaling variables. Scale-dependent MPEs vary between −7 % and 3 % for σHS and between 0 % and 1 % for fSCA. We performed a scale- and region-dependent evaluation of the parameterizations to assess the potential performances with independent data sets. This evaluation revealed that for the majority of the regions, the MPEs mostly lie between ±10 % for σHS and between −1 % and 1.5 % for fSCA. This suggests that the new parameterizations perform similarly well in most geographical regions.
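To make the evaluation metric explicit, the sketch below computes the mean percentage error (MPE) and shows a tanh-type fSCA closure of the kind used in this line of work; the coefficient and the toy data are placeholders, not the recalibrated, scale-dependent values reported in the paper.

```python
import numpy as np

# Mean percentage error (MPE) as used for the scale-dependent evaluation, plus
# a tanh-type fSCA closure of the kind used in this line of work. The 1.3
# coefficient and the toy data are placeholders, not the paper's recalibration.
def mpe(pred, obs):
    return 100.0 * np.mean((pred - obs) / obs)

def fsca(mean_hs, sigma_hs, c=1.3):
    return np.tanh(c * mean_hs / sigma_hs)

obs_sigma = np.array([0.30, 0.42, 0.55])   # observed std. dev. of snow depth (m)
par_sigma = np.array([0.28, 0.45, 0.52])   # parameterized values (m)
print("MPE (%):", mpe(par_sigma, obs_sigma))
print("fSCA:", fsca(mean_hs=1.2, sigma_hs=0.4))
```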


2019 ◽  
Vol 116 (27) ◽  
pp. 13174-13181 ◽  
Author(s):  
Maria Litvinova ◽  
Quan-Hui Liu ◽  
Evgeny S. Kulikov ◽  
Marco Ajelli

School-closure policies are considered one of the most promising nonpharmaceutical interventions for mitigating seasonal and pandemic influenza. However, their effectiveness is still debated, primarily due to the lack of empirical evidence about the behavior of the population during the implementation of the policy. Over the course of the 2015 to 2016 influenza season in Russia, we performed a diary-based contact survey to estimate the patterns of social interactions before and during the implementation of reactive school-closure strategies. We develop an innovative hybrid survey-modeling framework to estimate the time-varying network of human social interactions. By integrating this network with an infection transmission model, we reduce the uncertainty surrounding the impact of school-closure policies in mitigating the spread of influenza. When the school-closure policy is in place, we measure a significant reduction in the number of contacts made by students (14.2 vs. 6.5 contacts per day) and workers (11.2 vs. 8.7 contacts per day). This reduction is not offset by the measured increase in the number of contacts between students and nonhousehold relatives. Model simulations suggest that gradual reactive school-closure policies based on monitoring student absenteeism rates are capable of mitigating influenza spread. We estimate that without the implemented reactive strategies the attack rate of the 2015 to 2016 influenza season would have been 33% larger. Our study sheds light on the social mixing patterns of the population during the implementation of reactive school closures and provides key instruments for future cost-effectiveness analyses of school-closure policies.
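As a toy illustration of how the measured contact reductions feed a transmission model, the sketch below rescales per-group contact rates during a closure window in a two-group SIR model; apart from the surveyed contact numbers, all quantities are placeholders.

```python
import numpy as np

# Toy two-group (students, workers) SIR in which a closure switch rescales
# students' contacts from 14.2 to 6.5 per day and workers' from 11.2 to 8.7,
# the reductions measured in the survey. Transmission probability, recovery
# rate, populations, and the closure window are placeholders.
contacts_open   = np.array([14.2, 11.2])
contacts_closed = np.array([6.5, 8.7])
q, gamma = 0.025, 1 / 4.0
N = np.array([10_000.0, 20_000.0])

I = np.array([5.0, 5.0])
S, R = N - I, np.zeros(2)
for day in range(180):
    closed = 30 <= day < 44                  # hypothetical two-week closure
    c = contacts_closed if closed else contacts_open
    prevalence = I.sum() / N.sum()           # homogeneous mixing across groups
    new_inf = q * c * prevalence * S
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print("attack rate:", R.sum() / N.sum())
```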


2020 ◽  
pp. 107699862094120
Author(s):  
Jean-Paul Fox ◽  
Jeremias Wenzel ◽  
Konrad Klotzke

Standard item response theory (IRT) models have been extended with testlet effects to account for the nesting of items; these are well known as (Bayesian) testlet models or random effect models for testlets. The testlet modeling framework has several disadvantages. A sufficient number of testlet items are needed to estimate testlet effects, and a sufficient number of individuals are needed to estimate testlet variance. The prior for the testlet variance parameter can only represent a positive association among testlet items. The inclusion of testlet parameters significantly increases the number of model parameters, which can lead to computational problems. To avoid these problems, a Bayesian covariance structure model (BCSM) for testlets is proposed, where standard IRT models are extended with a covariance structure model to account for dependences among testlet items. In the BCSM, the dependence among testlet items is modeled without using testlet effects. This approach does not imply any sample size restrictions and is very efficient in terms of the number of parameters needed to describe testlet dependences. The BCSM is compared to the well-known Bayesian random effects model for testlets using a simulation study. Specifically for testlets with a few items, a small number of test takers, or weak associations among testlet items, the BCSM shows more accurate estimation results than the random effects model.
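A schematic of the covariance-structure idea: rather than introducing a testlet random effect, a common covariance term is added to every pair of items within a testlet. The sketch below builds such an implied covariance matrix; the testlet layout and covariance values are illustrative placeholders, not the BCSM's full Bayesian specification.

```python
import numpy as np

# Implied item covariance under a covariance-structure view of testlets:
# identity-like item variances plus a shared covariance among items of the
# same testlet, without any testlet random effect. Values are placeholders.
def testlet_covariance(testlets, item_var=1.0):
    n = sum(len(t["items"]) for t in testlets)
    cov = item_var * np.eye(n)
    for t in testlets:
        idx = np.array(t["items"])
        cov[np.ix_(idx, idx)] += t["cov"]    # shared dependence within testlet
        cov[idx, idx] -= t["cov"]            # keep the diagonal at item_var
    return cov

print(testlet_covariance([{"items": [0, 1, 2], "cov": 0.3},
                          {"items": [3, 4],    "cov": 0.1}]))
```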


2018 ◽  
Vol 11 (12) ◽  
pp. 4873-4888 ◽  
Author(s):  
Christopher J. Skinner ◽  
Tom J. Coulthard ◽  
Wolfgang Schwanghart ◽  
Marco J. Van De Wiel ◽  
Greg Hancock

Abstract. The evaluation and verification of landscape evolution models (LEMs) has long been limited by a lack of suitable observational data and statistical measures which can fully capture the complexity of landscape changes. This lack of data limits the use of the objective-function-based evaluation that is prolific in other modelling fields, and restricts the application of sensitivity analyses to the models and the consequent assessment of model uncertainties. To overcome this deficiency, a novel model function approach has been developed, with each model function representing an aspect of model behaviour, which allows for the application of sensitivity analyses. The model function approach is used to assess the relative sensitivity of the CAESAR-Lisflood LEM to a set of model parameters by applying the Morris method sensitivity analysis for two contrasting catchments. The test revealed that the model was most sensitive to the choice of the sediment transport formula for both catchments, and that each parameter influenced model behaviours differently, with model functions relating to internal geomorphic changes responding in a different way to those relating to the sediment yields from the catchment outlet. The model functions proved useful for providing a way of evaluating the sensitivity of LEMs in the absence of data and methods for an objective function approach.
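To illustrate the screening idea behind the Morris method, the sketch below computes one-at-a-time elementary effects for a stand-in model and ranks parameters by the mean absolute effect (a mu*-like summary); it omits the full trajectory design and does not call CAESAR-Lisflood.

```python
import numpy as np

rng = np.random.default_rng(2)

# Elementary-effects screening in the spirit of the Morris method: perturb one
# parameter at a time from random base points and summarize |EE| per parameter.
# The toy model below is a stand-in, not the CAESAR-Lisflood LEM.
def toy_model(x):
    return x[0] ** 2 + 0.1 * x[1] + 0.0 * x[2]

def elementary_effects(model, n_params, n_trajectories=20, delta=0.1):
    ee = np.zeros((n_trajectories, n_params))
    for r in range(n_trajectories):
        base = rng.uniform(0, 1 - delta, n_params)
        y0 = model(base)
        for i in range(n_params):
            x = base.copy()
            x[i] += delta
            ee[r, i] = (model(x) - y0) / delta
    return np.abs(ee).mean(axis=0)           # mu*-like sensitivity ranking

print(elementary_effects(toy_model, n_params=3))
```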

