Bayesian spatial and spatio-temporal approaches to modelling dengue fever: a systematic review

2018 ◽  
Vol 147 ◽  
Author(s):  
A. Aswi ◽  
S. M. Cramb ◽  
P. Moraga ◽  
K. Mengersen

Abstract Dengue fever (DF) is one of the world's most disabling mosquito-borne diseases, with a variety of approaches available to model its spatial and temporal dynamics. This paper aims to identify and compare the different spatial and spatio-temporal Bayesian modelling methods that have been applied to DF and examine influential covariates that have been reportedly associated with the risk of DF. A systematic search was performed in December 2017, using Web of Science, Scopus, ScienceDirect, PubMed, ProQuest and Medline (via Ebscohost) electronic databases. The search was restricted to refereed journal articles published in English from January 2000 to November 2017. Thirty-one articles met the inclusion criteria. Using a modified quality assessment tool, the median quality score across studies was 14/16. The most popular Bayesian statistical approach to dengue modelling was a generalised linear mixed model with spatial random effects described by a conditional autoregressive prior. A limited number of studies included spatio-temporal random effects. Temperature and precipitation were shown to often influence the risk of dengue. Developing spatio-temporal random-effect models, considering other priors, using a dataset that covers an extended time period, and investigating other covariates would help to better understand and control DF transmission.
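As a concrete illustration of the conditional autoregressive (CAR) prior that dominates this literature, the following is a minimal NumPy sketch of a proper CAR precision matrix on a toy four-area map. The adjacency structure and the values of alpha and tau are illustrative assumptions, not taken from any reviewed study.

```python
import numpy as np

def car_precision(W, alpha=0.9, tau=1.0):
    """Precision matrix of a proper CAR prior: Q = tau * (D - alpha * W),
    where W is a symmetric 0/1 adjacency matrix and D its row-sum diagonal."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - alpha * W)

# Toy map: four areas arranged in a line (1-2-3-4 adjacency)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

Q = car_precision(W)
# For 0 < alpha < 1 the precision matrix is positive definite, so spatial
# random effects can be drawn from N(0, Q^{-1}).
rng = np.random.default_rng(0)
phi = rng.multivariate_normal(np.zeros(4), np.linalg.inv(Q))
```

Neighbouring areas receive negative off-diagonal precision entries, which is what induces the spatial smoothing of relative risks in the dengue models reviewed above.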

2020 ◽  
pp. 1-37
Author(s):  
Tal Yarkoni

Abstract Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned—that is, that the two must refer to roughly the same set of hypothetical observations. Here I argue that many applications of statistical inference in psychology fail to meet this basic condition. Focusing on the most widely used class of model in psychology—the linear mixed model—I explore the consequences of failing to statistically operationalize verbal hypotheses in a way that respects researchers' actual generalization intentions. I demonstrate that whereas the "random effect" formalism is used pervasively in psychology to model inter-subject variability, few researchers accord the same treatment to other variables they clearly intend to generalize over (e.g., stimuli, tasks, or research sites). The under-specification of random effects imposes far stronger constraints on the generalizability of results than most researchers appreciate. Ignoring these constraints can dramatically inflate false positive rates, and often leads researchers to draw sweeping verbal generalizations that lack a meaningful connection to the statistical quantities they are putatively based on. I argue that failure to take the alignment between verbal and statistical expressions seriously lies at the heart of many of psychology's ongoing problems (e.g., the replication crisis), and conclude with a discussion of several potential avenues for improvement.
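Yarkoni's point about under-specified random effects can be illustrated with a small simulation: when stimuli carry their own random effects but the analysis treats only subjects as random, the naive standard error badly understates how much the estimated condition mean varies across stimulus samples. The sample sizes and variance components below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_stim = 30, 8
true_effect = 0.0                 # no population-level condition effect
sigma_stim, sigma_eps = 1.0, 0.5  # stimulus and residual SDs (assumed)

def one_experiment():
    # Each replication samples a NEW set of stimuli, as a researcher
    # intending to generalize over stimuli implicitly does.
    stim_fx = rng.normal(0.0, sigma_stim, n_stim)
    y = true_effect + stim_fx[None, :] + rng.normal(0, sigma_eps, (n_subj, n_stim))
    return y.mean()               # estimated condition mean

estimates = np.array([one_experiment() for _ in range(2000)])

# A subject-only analysis expects SD(mean) ~ sigma_eps / sqrt(n_subj * n_stim),
# but sampling stimuli contributes an extra sigma_stim / sqrt(n_stim).
naive_sd = sigma_eps / np.sqrt(n_subj * n_stim)
actual_sd = estimates.std()
```

Under these settings the realized variability of the estimate is several times the naive standard error, which is exactly the mechanism by which ignoring stimulus random effects inflates false positive rates.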


2020 ◽  
pp. 1471082X2096691
Author(s):  
Amani Almohaimeed ◽  
Jochen Einbeck

Random effect models have been a mainstream statistical technique for several decades, and the same can be said of response transformation models such as the Box–Cox transformation. The latter aims to ensure that the assumptions of normality and homoscedasticity of the response distribution are fulfilled, which are essential conditions for inference based on a linear model or a linear mixed model. However, methodology for response transformation with simultaneous inclusion of random effects has been developed and implemented only scarcely, and is so far restricted to Gaussian random effects. We develop such methodology without requiring parametric assumptions on the distribution of the random effects. This is achieved by extending the ‘nonparametric maximum likelihood’ towards a ‘nonparametric profile maximum likelihood’ technique, allowing one to deal with overdispersion as well as two-level data scenarios.
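The Box–Cox side of this machinery can be sketched without the random effects: profile the transformation parameter lambda over a grid, maximising the Gaussian log-likelihood including the Jacobian term. This toy version deliberately omits the random-effect (NPML) component that is the paper's contribution; the simulated data and grid are illustrative.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; y must be strictly positive."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    """Gaussian profile log-likelihood of the Box-Cox model, up to a constant,
    including the Jacobian term (lam - 1) * sum(log y)."""
    z = boxcox(y, lam)
    n = len(y)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

# Log-normal responses, so the true transformation parameter is near 0
rng = np.random.default_rng(2)
y = np.exp(rng.normal(0.0, 0.6, 500))
grid = np.linspace(-2, 2, 81)
lam_hat = grid[np.argmax([profile_loglik(y, l) for l in grid])]
```

With log-normal data the profiled maximum lands near lambda = 0, i.e. the log transform, as expected.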


2019 ◽  
Vol 23 (11) ◽  
pp. 4763-4781 ◽  
Author(s):  
Juan Ossa-Moreno ◽  
Greg Keir ◽  
Neil McIntyre ◽  
Michela Cameletti ◽  
Diego Rivera

Abstract. The accuracy of hydrological assessments in mountain regions is often hindered by the low density of gauges coupled with complex spatial variations in climate. Increasingly, spatial datasets (i.e. satellite and other products) and new computational tools are merged with ground observations to address this problem. This paper presents a comparison of approaches of different complexities to spatially interpolate monthly precipitation and daily temperature time series in the upper Aconcagua catchment in central Chile. A generalised linear mixed model (GLMM) whose parameters are estimated through approximate Bayesian inference is compared with simpler alternatives: inverse distance weighting (IDW), lapse rates (LRs), and two methods that analyse the residuals between observations and WorldClim (WC) data or Climate Hazards Group Infrared Precipitation with Station data (CHIRPS). The assessment is based on a leave-one-out cross validation (LOOCV), with the root-mean-squared error (RMSE) being the primary performance criterion for both climate variables, while the probability of detection (POD) and false-alarm ratio (FAR) are also used for precipitation. Results show that for spatial interpolation of temperature and precipitation, the approaches based on the WorldClim or CHIRPS residuals may be recommended as being more accurate, easy to apply and relatively robust to tested reductions in the number of estimation gauges. The GLMM has comparable performance when all gauges were included and is better for estimating occurrence of precipitation but is more sensitive to the reduction in the number of gauges used for estimation, which is a constraint in sparsely monitored catchments.
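Of the interpolators compared, inverse distance weighting is simple enough to sketch together with the leave-one-out RMSE criterion used in the study. The toy gauge coordinates and values below are invented for illustration.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighted interpolation at new locations."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # guard against zero distances
    w = 1.0 / d**power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

def loocv_rmse(xy, z, power=2.0):
    """Leave-one-out cross validation RMSE for IDW."""
    preds = np.array([
        idw(np.delete(xy, i, axis=0), np.delete(z, i), xy[i:i+1], power)[0]
        for i in range(len(z))
    ])
    return np.sqrt(np.mean((preds - z)**2))

# Toy gauge network on a 3 x 2 grid; values increase smoothly with easting
xy = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], dtype=float)
z = xy[:, 0] * 10.0 + 5.0
rmse = loocv_rmse(xy, z)
```

Even on smooth synthetic data, the LOOCV error at the edge gauges is non-trivial, which hints at why IDW underperforms residual-based methods in sparsely gauged mountain catchments.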


2015 ◽  
Vol 35 (9) ◽  
pp. 1488-1501 ◽  
Author(s):  
B. Ganguli ◽  
S. Sen Roy ◽  
M. Naskar ◽  
E. J. Malloy ◽  
E. A. Eisen



2018 ◽  
Author(s):  
Juan Ossa-Moreno ◽  
Greg Keir ◽  
Neil McIntyre ◽  
Michela Cameletti ◽  
Diego Rivera

Abstract. The accuracy of hydrological assessments in mountain regions is often hindered by the low density of gauges, coupled with complex spatial variations in climate. Increasingly, spatial data sets (i.e. satellite and gridded products) and new computational tools are used to address this problem, by assisting with the spatial interpolation of ground observations. This paper presents a comparison of approaches of different complexity to spatially interpolate precipitation and temperature time series in the upper Aconcagua catchment in central Chile. A Generalised Linear Mixed Model whose parameters are estimated through approximate Bayesian inference is compared with three simpler alternatives: Inverse Distance Weighting, Lapse Rates and a method based on WorldClim data. The assessment is based on a leave-one-out cross validation, with the Root Mean Squared Error being the primary performance criterion for both climate variables, while Probability of Detection and False Alarm Ratio are also used for precipitation. Results show that for spatial interpolation of the expected values of temperature and precipitation, the WorldClim approach may be recommended as being more accurate, easier to apply and relatively more robust to tested reductions in the number of estimation gauges, particularly for temperature. The Generalised Linear Mixed Model has comparable performance when all gauges were included, but is more sensitive to the reduction in the number of gauges used for estimation, which is a constraint in sparsely monitored catchments.


2021 ◽  
pp. bmjqs-2021-013721
Author(s):  
Mohamad Ghazi Fakih ◽  
Allison Ottenbacher ◽  
Baligh Yehia ◽  
Richard Fogel ◽  
Collin Miller ◽  
...  

Background: Mortality associated with COVID-19 has improved compared with the early pandemic period. The effect of hospital COVID-19 patient prevalence on COVID-19 mortality has not been well studied.

Methods: We analysed data for adults with confirmed SARS-CoV-2 infection admitted to 62 hospitals within a multistate health system over 12 months. Mortality was evaluated based on patient demographic and clinical risk factors, COVID-19 hospital prevalence and calendar time period of the admission, using a generalised linear mixed model with site of care as the random effect.

Results: 38 104 patients with COVID-19 were hospitalised, and during their encounters the prevalence of COVID-19 averaged 16% of the total hospitalised population. Between March–April 2020 and January–February 2021, COVID-19 mortality declined from 19% to 12% (p<0.001). In the adjusted multivariable analysis, mid and high COVID-19 inpatient prevalence were associated with a 25% and 41% increase, respectively, in the odds of COVID-19 mortality (an absolute contribution to the probability of death of 2%–3%) compared with patients with COVID-19 in facilities with low prevalence (<10%) (high prevalence >25%: adjusted OR (AOR) 1.41, 95% CI 1.23 to 1.61; mid prevalence 10%–25%: AOR 1.25, 95% CI 1.13 to 1.38). Mid and high COVID-19 prevalence accounted for 76% of patient encounters.

Conclusions: Although inpatient mortality for patients with COVID-19 has declined sharply compared with earlier in the pandemic, higher COVID-19 hospital prevalence remained a common risk factor for COVID-19 mortality. Hospital leaders need to reconsider how to support the care of patients in times of increased volume and complexity, such as those experienced during COVID-19 surges.
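To see how an adjusted odds ratio maps onto the small absolute contributions the authors report, one can convert the OR back to a probability at an assumed baseline. The 10% baseline mortality used below is an assumption for illustration; the abstract does not report the baseline rate in low-prevalence facilities.

```python
def risk_from_or(p0, odds_ratio):
    """Convert a baseline probability and an odds ratio into the implied
    probability in the exposed group: p1 = OR*odds0 / (1 + OR*odds0)."""
    odds0 = p0 / (1.0 - p0)
    odds1 = odds_ratio * odds0
    return odds1 / (1.0 + odds1)

# Assumed 10% baseline mortality in low-prevalence hospitals (illustrative)
p0 = 0.10
for label, aor in [("mid prevalence", 1.25), ("high prevalence", 1.41)]:
    p1 = risk_from_or(p0, aor)
    print(f"{label}: {p0:.0%} -> {p1:.1%} (+{p1 - p0:.1%})")
```

At this assumed baseline, AORs of 1.25 and 1.41 translate into roughly 2.2 and 3.5 percentage points of additional mortality, consistent in magnitude with the 2%–3% absolute contribution quoted in the abstract.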


Author(s):  
Giulia Vannucci ◽  
Anna Gottard ◽  
Leonardo Grilli ◽  
Carla Rampichini

Mixed or multilevel models exploit random effects to deal with hierarchical data, where statistical units are clustered in groups and cannot be assumed independent. Sometimes the assumption of linear dependence of a response on a set of explanatory variables is not plausible, and model specification becomes a challenging task. Regression trees can be helpful for capturing non-linear effects of the predictors. This method was extended to clustered data by modelling the fixed effects with a decision tree while accounting for the random effects with a linear mixed model in a separate step (Hajjem & Larocque, 2011; Sela & Simonoff, 2012). Random effect regression trees have been shown to be less sensitive to parametric assumptions and to provide better predictive power than linear models with random effects and regression trees without random effects. We propose a new random effect model, called the tree embedded linear mixed model, where the regression function is piecewise-linear, consisting of the sum of a tree component and a linear component. This model can handle non-linear effects, interaction effects and cluster mean dependencies. The proposal is the mixed-effect version of semi-linear regression trees (Vannucci, 2019; Vannucci & Gottard, 2019). Model fitting is carried out by an iterative two-stage estimation procedure in which the fixed and the random effects are estimated jointly. The proposed model allows a decomposition of the effect of a given predictor within and between clusters. We show, via a simulation study and an application to INVALSI data, that these extensions improve the predictive performance of the model in the presence of quasi-linear relationships, while avoiding overfitting and facilitating interpretability.
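The iterative two-stage idea can be caricatured in a few lines: alternate between fitting a tree to the response net of the current random intercepts, and updating shrunken group-level intercepts from the tree residuals. This sketch uses a depth-1 stump in place of a full tree and assumes known variance components, so it illustrates the alternation only, not the authors' estimator.

```python
import numpy as np

def fit_stump(x, y):
    """Depth-1 regression tree: best single split on x minimizing SSE."""
    best = (np.inf, None, y.mean(), y.mean())
    for s in np.unique(x)[:-1]:
        left, right = y[x <= s], y[x > s]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, split, mu_l, mu_r = best
    return lambda xn: np.where(xn <= split, mu_l, mu_r)

def tree_plus_random_intercepts(x, y, groups, n_iter=10,
                                sigma2_u=1.0, sigma2_e=1.0):
    """Two-stage alternation: (1) tree on y minus current random intercepts,
    (2) shrunken group means of the tree residuals as random intercepts."""
    u = np.zeros(groups.max() + 1)
    for _ in range(n_iter):
        tree = fit_stump(x, y - u[groups])      # stage 1: fixed (tree) part
        resid = y - tree(x)
        for g in range(len(u)):                  # stage 2: random intercepts
            r = resid[groups == g]
            shrink = len(r) / (len(r) + sigma2_e / sigma2_u)
            u[g] = shrink * r.mean()
        u -= u.mean()                            # identifiability constraint
    return tree, u

# Synthetic clustered data: a step function in x plus group intercepts
rng = np.random.default_rng(3)
groups = np.repeat(np.arange(4), 50)
x = rng.uniform(0, 1, 200)
true_u = np.array([-1.0, -0.3, 0.3, 1.0])
y = np.where(x > 0.5, 2.0, 0.0) + true_u[groups] + rng.normal(0, 0.2, 200)
tree, u_hat = tree_plus_random_intercepts(x, y, groups)
```

On this synthetic example the stump recovers the step in x while the shrunken intercepts track the group effects, the same division of labour the tree embedded linear mixed model formalises.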


2021 ◽  
pp. 1-26
Author(s):  
Traci A. Bekelman ◽  
Corby K. Martin ◽  
Susan L. Johnson ◽  
Deborah H. Glueck ◽  
Katherine A. Sauder ◽  
...  

Abstract The limitations of self-report measures of dietary intake are well known. Novel, technology-based measures of dietary intake may provide a more accurate, less burdensome alternative to existing tools. The first objective of this study was to compare participant burden for two technology-based measures of dietary intake among school-age children: the Automated Self-Administered 24-hour Dietary Assessment Tool-2018 (ASA24-2018) and the Remote Food Photography Method (RFPM). The second objective was to compare reported energy intake for each method with the Estimated Energy Requirement (EER) for each child, as a benchmark for actual intake. Forty parent–child dyads participated in two 3-day dietary assessments: a parent proxy-reported version of the ASA24 and the RFPM. A parent survey was subsequently administered to compare satisfaction, ease of use and burden for each method. A linear mixed model examined differences in total daily energy intake (TDEI) between assessments, and between each assessment method and the EER. Reported energy intake was 379 kcal higher with the ASA24 than with the RFPM (p = 0.0002). Reported energy intake with the ASA24 was 231 kcal higher than the EER (p = 0.008). Reported energy intake with the RFPM did not differ significantly from the EER (difference in predicted means = −148 kcal, p = 0.09). Median satisfaction and ease-of-use scores were 5 out of 6 for both methods. A higher proportion of parents reported that the ASA24 was more time consuming than the RFPM (74.4% vs. 25.6%, p = 0.002). Use of both methods is warranted given their high satisfaction among parents.

