true effect size
Recently Published Documents

TOTAL DOCUMENTS: 36 (FIVE YEARS: 15)
H-INDEX: 9 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Justin K Sheen ◽  
Johannes Haushofer ◽  
C. Jessica E. Metcalf ◽  
Lee Kennedy-Shaffer

Controlling the SARS-CoV-2 pandemic and future pathogen outbreaks requires an understanding of which non-pharmaceutical interventions are effective at reducing transmission. Observational studies, however, are subject to biases that can suggest an effect even when there is no true effect. Cluster randomized trials provide a means to conduct valid hypothesis tests of the effect of interventions on community transmission. While such trials may require only a short duration, they often require large sample sizes to achieve adequate power. However, methods for determining the sample sizes required for such tests in an outbreak setting are largely undeveloped, and the question of whether these designs are practical remains open. We develop approximate sample size formulae and simulation-based sample size methods for cluster randomized trials in infectious disease outbreaks. We highlight key relationships between the required sample sizes and the characteristics of transmission and of the enrolled communities, describe settings where cluster randomized trials powered to detect a meaningful true effect size may be feasible, and provide recommendations for investigators planning such trials. The approximate formulae and simulation banks may be used by investigators to quickly assess the feasibility of a trial, after which more detailed methods can size the trial more precisely. For example, we show that community-scale trials with 220 clusters and 100 tested individuals per cluster are powered to identify interventions that reduce transmission by 40% in one generation interval, using parameters identified for SARS-CoV-2 transmission. For more modest treatment effects, or in settings with extreme overdispersion of transmission, much larger sample sizes are required.
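A minimal R sketch of the simulation-based power calculation the abstract describes: clusters are randomized between arms, overdispersion is induced with a beta-binomial cluster model, and power is the rejection rate of a cluster-level test. The positivity rate, intra-cluster correlation, and other parameter values are illustrative assumptions, not values from the paper.

```r
set.seed(1)

power_sim <- function(n_clusters = 220,  # clusters across both arms
                      n_tested   = 100,  # individuals tested per cluster
                      p_control  = 0.05, # positivity without intervention (assumed)
                      effect     = 0.40, # relative reduction in transmission
                      icc        = 0.02, # intra-cluster correlation (assumed)
                      n_sims     = 2000,
                      alpha      = 0.05) {
  p_treat <- p_control * (1 - effect)
  k <- (1 - icc) / icc  # beta-binomial precision giving the desired ICC
  rejections <- replicate(n_sims, {
    arm    <- rep(0:1, length.out = n_clusters)
    p_mean <- ifelse(arm == 1, p_treat, p_control)
    # cluster-level positivity drawn from a beta to induce overdispersion
    p_cl <- rbeta(n_clusters, p_mean * k, (1 - p_mean) * k)
    pos  <- rbinom(n_clusters, n_tested, p_cl)
    # simple cluster-level t-test on positivity proportions
    t.test(pos[arm == 1] / n_tested, pos[arm == 0] / n_tested)$p.value < alpha
  })
  mean(rejections)  # estimated power
}

power_sim()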


2021 ◽  
Author(s):  
Kelsey Lucca ◽  
Arthur Capelier-Mourguy ◽  
Krista Byers-Heinlein ◽  
Laura Cirelli ◽  
Rodrigo Dal Ben ◽  
...  

Evaluating others’ actions as praiseworthy or blameworthy is a fundamental aspect of human nature. A seminal study published in 2007 suggested that the ability to form social evaluations based on third-party interactions emerges within the first year of life, considerably earlier than previously thought (Hamlin, Wynn, & Bloom, 2007). In this study, infants demonstrated a preference for a character (i.e., a shape with eyes) who helped, over one who hindered, another character who tried but failed to climb a hill. This study sparked a new line of inquiry into infants’ social evaluations; however, numerous attempts to replicate the original findings have yielded mixed results, with some reporting effects not reliably different from chance. These failed replications point to at least two possibilities: (1) the original study may have overestimated the true effect size of infants’ preference for helpers, or (2) key methodological or contextual differences from the original study may have compromised the replication attempts. Here we present a pre-registered, closely coordinated, multi-laboratory, standardized study aimed at replicating the helping/hindering finding using a well-controlled video version of the hill show. We intended to (1) provide a precise estimate of the true effect size of infants’ preference for helpers over hinderers, and (2) determine the degree to which infants’ preferences are based on the social features of the Helper/Hinderer scenarios. XYZ labs participated in the study, yielding a total sample size of XYZ infants between the ages of 5.5 and 10.5 months. A brief summary of the results will be added after data collection.


2021 ◽  
Author(s):  
Eileen Kranz Graham ◽  
Emily C Willroth ◽  
Sara J Weston ◽  
Graciela Muniz-Terrera ◽  
Sean Clouston ◽  
...  

Coordinated analysis is a powerful form of integrative analysis, well suited to promoting cumulative scientific knowledge, particularly in subfields of psychology that focus on the processes of lifespan development and aging. Coordinated analysis uses raw data from individual studies to run similar hypothesis tests for a given research question across multiple datasets, thereby making it less vulnerable to common criticisms of meta-analysis such as file-drawer effects or publication bias. Coordinated analysis can use random-effects meta-analysis to summarize results, which does not assume a single true effect size for a given statistical test. By fitting parallel models in separate datasets, coordinated analysis preserves the heterogeneity among studies and provides a window into the generalizability and external validity of a set of results. The current paper achieves three goals: First, it describes the phases of a coordinated analysis so that interested researchers can more easily adopt these methods in their labs. Second, it discusses the importance of coordinated analysis within the context of the credibility revolution in psychology. Third, it encourages the use of existing data networks and repositories for conducting coordinated analysis, in order to enhance accessibility and inclusivity. Subfields of research that require time- or resource-intensive data collection, such as longitudinal aging research, would benefit from adopting these methods.
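To make the two-phase workflow concrete, here is a hedged R sketch of a coordinated analysis on three simulated raw datasets: the same regression is fit in each study, and the study-specific estimates are then summarized with a DerSimonian-Laird random-effects model. The datasets, variable names, and effect sizes are all hypothetical.

```r
set.seed(2)
sim_study <- function(n, b) {
  d <- data.frame(predictor = rnorm(n), age = rnorm(n, 50, 10))
  d$outcome <- b * d$predictor + 0.01 * d$age + rnorm(n)
  d
}
datasets <- list(sim_study(200, 0.3), sim_study(150, 0.2), sim_study(300, 0.4))

# Phase 1: fit the same model, with the same covariates, in every dataset
fits <- lapply(datasets, function(d) {
  s <- summary(lm(outcome ~ predictor + age, data = d))$coefficients
  c(est = s["predictor", "Estimate"], se = s["predictor", "Std. Error"])
})
est <- sapply(fits, `[[`, "est"); se <- sapply(fits, `[[`, "se")

# Phase 2: random-effects (DerSimonian-Laird) summary; between-study
# heterogeneity is retained rather than assumed away
w    <- 1 / se^2
mu_f <- sum(w * est) / sum(w)                 # fixed-effect mean, used for Q
q    <- sum(w * (est - mu_f)^2)
tau2 <- max(0, (q - (length(est) - 1)) / (sum(w) - sum(w^2) / sum(w)))
w_re <- 1 / (se^2 + tau2)
mu   <- sum(w_re * est) / sum(w_re)
c(pooled = mu, se = sqrt(1 / sum(w_re)), tau2 = tau2)
```

A nonzero tau2 here is informative in itself: it quantifies the heterogeneity across studies that a single-effect summary would hide.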


2021 ◽  
Author(s):  
Hilde Elisabeth Maria Augusteijn ◽  
Robbie Cornelis Maria van Aert ◽  
Marcel A. L. M. van Assen

Publication bias remains a major challenge when conducting a meta-analysis. It may result in overestimated effect sizes, an increased frequency of false positives, and over- or underestimation of the effect size heterogeneity parameter. A new method is introduced, Bayesian Meta-Analytic Snapshot (BMAS), which evaluates both the effect size and its heterogeneity and corrects for potential publication bias. It evaluates the probability that the true effect size is zero, small, medium, or large, and likewise for the true heterogeneity. This approach, which provides an intuitive evaluation of uncertainty in the assessment of effect size and heterogeneity, is illustrated with a real-data example, a simulation study, and a Shiny web application of BMAS.
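As a rough illustration of the snapshot idea (not the BMAS implementation, which also models heterogeneity and corrects for publication bias), the R sketch below turns a set of study estimates into posterior probabilities over four candidate true effect sizes. The candidate values 0, 0.2, 0.5, and 0.8 (Cohen's conventions) and all input numbers are assumptions.

```r
y  <- c(0.12, 0.25, 0.31, 0.05, 0.22)  # observed study effects (hypothetical)
se <- c(0.10, 0.12, 0.15, 0.09, 0.11)  # their standard errors

# Four candidate true effect sizes, zero / small / medium / large
delta  <- c(zero = 0, small = 0.2, medium = 0.5, large = 0.8)
loglik <- sapply(delta, function(d) sum(dnorm(y, d, se, log = TRUE)))

# Posterior probabilities under equal prior odds across the four candidates
post <- exp(loglik - max(loglik))
round(post / sum(post), 3)  # for these inputs, most mass falls on "small"
```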


Author(s):  
Roy Baumeister

The artificial environment of a psychological laboratory experiment offers an excellent method for testing whether a causal relationship exists, but it is mostly useless for predicting the size and power of such effects in normal life. In comparison with effects out in the world, laboratory effects are often artificially large, because the laboratory situation is set up precisely to capture this effect, with extraneous factors screened out. Equally problematic, laboratory effects are often artificially small, given practical and ethical constraints that make laboratory situations watered-down echoes of what happens in life. Furthermore, in many cases the very notion of a true effect size (as if it were constant across different manipulations and dependent variables) is absurd. These problems are illustrated with examples from the author’s own research programs. It is also revealing that experimental effect sizes, though often quite precisely calculated and proudly integrated into meta-analyses, have attracted almost zero attention in terms of substantive theory about human mental processes and behavior. At best, effect sizes from laboratory experiments provide information that could help other researchers design their experiments, but that means effect sizes are shop talk, not information about reality. It is recommended that researchers shift toward a more realistic appreciation of how little can be learned about the human mind and behavior from effect sizes in laboratory studies.


2020 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Joris Mulder

Meta-analysis methods are used to synthesize the results of multiple studies on the same topic. The most frequently used statistical model in meta-analysis is the random-effects model, which contains parameters for the average effect, the between-study variance in the primary studies' true effect sizes, and random effects for the study-specific effects. We propose Bayesian hypothesis testing and estimation methods using the marginalized random-effects meta-analysis (MAREMA) model, in which the study-specific true effects are regarded as nuisance parameters and integrated out of the model. A flat prior distribution is placed on the overall effect size for estimation, and a proper unit-information prior on the overall effect size is proposed for hypothesis testing. For the between-study variance in true effect size, a proper uniform prior is placed on the proportion of total variance attributable to between-study variability. Hypothesis testing uses Bayes factors, which allow testing both point and one-sided hypotheses. The proposed methodology has several attractive properties. First, the MAREMA model encompasses models with zero, negative, and positive between-study variance, which makes it possible to test for zero between-study variance because it is no longer a boundary problem. Second, the methodology is suitable for default Bayesian meta-analyses, as it requires no prior information about the unknown parameters. Third, the methodology can be used even in the extreme case when only two studies are available, because Bayes factors are not based on large-sample theory. We illustrate the developed methods by applying them to two meta-analyses and introduce easy-to-use software in the R package BFpack to compute the proposed Bayes factors.
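A hedged R sketch of the kind of Bayes factor involved: it tests a zero average effect in the marginal model in which study-specific effects have been integrated out. Unlike MAREMA, this sketch plugs in a fixed between-study variance rather than placing a prior on it, and the unit-information-style prior scale is an assumption; the actual computations live in BFpack.

```r
y    <- c(0.21, 0.35, 0.10, 0.28)  # study effects (hypothetical)
v    <- c(0.02, 0.03, 0.02, 0.04)  # their sampling variances
tau2 <- 0.01                        # plugged-in between-study variance (assumed)

# Marginal model: y_i ~ N(mu, v_i + tau2), study effects integrated out
marg_sd <- sqrt(v + tau2)
lik <- function(mu) sapply(mu, function(m) prod(dnorm(y, m, marg_sd)))

# Normal prior on mu carrying roughly one study's worth of information
prior_sd <- sqrt(length(y) / sum(1 / marg_sd^2))
m1 <- integrate(function(mu) lik(mu) * dnorm(mu, 0, prior_sd), -Inf, Inf)$value
m0 <- lik(0)
bf01 <- m0 / m1   # > 1 favors the null of no average effect
bf01
```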


Author(s):  
Yang Hai ◽  
Yalu Wen

Motivation: Accurate disease risk prediction is essential for precision medicine. Existing models assume either that diseases are caused by groups of predictors with small-to-moderate effects or by a few isolated predictors with large effects. Their performance can be sensitive to the underlying disease mechanisms, which are usually unknown in advance.
Results: We developed a Bayesian linear mixed model (BLMM) in which genetic effects are modelled using a hybrid of sparsity regression and a linear mixed model with multiple random effects. The parameters in BLMM are inferred through a computationally efficient variational Bayes algorithm. The proposed method can resemble the shape of the true effect size distribution, captures the predictive effects from both common and rare variants, and is robust against various disease models. Through extensive simulations and an application to a whole-genome sequencing dataset from the Alzheimer's Disease Neuroimaging Initiative, we demonstrate that BLMM has better prediction performance than existing methods and can detect variables and/or genetic regions that are predictive.
Availability: The R package is available at https://github.com/yhai943/BLMM
Supplementary information: Supplementary data are available at Bioinformatics online.
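The following R snippet only simulates the kind of hybrid effect-size distribution BLMM is built to capture: a few sparse large effects on top of a polygenic background of many small ones. It does not implement the variational Bayes algorithm, and all parameter values are made up.

```r
set.seed(3)
p <- 2000                               # number of variants (assumed)
n <- 500                                # number of individuals (assumed)

# Hybrid effect-size distribution: sparse large + dense small components
is_large <- rbinom(p, 1, 0.002) == 1
beta <- ifelse(is_large,
               rnorm(p, 0, 0.5),        # rare, isolated large effects
               rnorm(p, 0, 0.01))       # polygenic background of small effects

x <- matrix(rbinom(n * p, 2, 0.3), nrow = n)  # genotype dosages (0/1/2)
g <- as.vector(x %*% beta)                    # true genetic values
y <- g + rnorm(n, 0, sd(g))                   # phenotype, ~50% heritability

hist(beta, breaks = 100)  # heavy center with a few outlying large effects
```

A purely sparse model or a purely polygenic model would each misfit data generated this way, which is the sensitivity to disease mechanism the abstract describes.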


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0241604
Author(s):  
Anna Karpińska-Mirecka ◽  
Joanna Bartosińska ◽  
Dorota Krasowska

Background: The Dermatology Life Quality Index (DLQI) is commonly used to assess the quality of life of patients with skin diseases. Clinical trials confirm the positive effect of biologics and new molecules on the quality of life of patients with plaque psoriasis.
Main objectives: To investigate the effect of infliximab, adalimumab, ixekizumab, secukinumab, and tofacitinib on Health-Related Quality of Life (HRQOL), measured by the DLQI, in adult patients with plaque psoriasis, with respect to the patients' race, the agent or placebo used, the agent's dosage, treatment duration, and the DLQI score before and after the start of treatment.
Materials and methods: Systematic literature search for papers written in English in four databases (PubMed, EMBASE, Scopus, and ClinicalTrials.gov), supplemented by manual searching (Google). Cochran's Q and I² tests were used to evaluate heterogeneity, i.e., the degree of variation in the true effect size estimates between the analysed studies. The standardized mean difference (SMD; Hedges' g) was applied to measure the difference between two group means (treated vs. non-treated or treated vs. placebo). Data coding and Hedges' g values were calculated according to the guidance of MetaXL software, version 5.3.
Main results: 43 studies, totalling 25,898 individuals, were evaluated with the DLQI, and weighted mean scores were derived for the analysis. The mean DLQI scores ranged from 6.83 to 17.8, with an overall DLQI score of 12.12 (95% CI: 11.24 to 13.06). A random-effects model demonstrated considerable heterogeneity of the study results (I² = 98%; p < 0.001).
Conclusion: Infliximab, adalimumab, ixekizumab, secukinumab, and tofacitinib improved HRQOL, as measured by the DLQI, in adult patients with plaque psoriasis. Patients with lower quality of life before treatment obtained better results.
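For readers who want the quantities in this abstract spelled out, here is a self-contained R sketch of Hedges' g with its sampling variance, and of Cochran's Q and I² computed from a set of such estimates. The three "trials" and their DLQI summary statistics are invented for illustration.

```r
# Hedges' g: standardized mean difference with a small-sample correction
hedges_g <- function(m1, m2, sd1, sd2, n1, n2) {
  sp <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d  <- (m1 - m2) / sp
  j  <- 1 - 3 / (4 * (n1 + n2) - 9)   # correction factor J
  g  <- j * d
  vg <- j^2 * ((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
  c(g = g, var = vg)
}

# DLQI change, treated vs placebo, in three hypothetical trials
res <- rbind(hedges_g(-8.1, -2.3, 6.0, 5.5, 120, 118),
             hedges_g(-7.4, -1.9, 5.8, 6.1,  90,  92),
             hedges_g(-9.0, -2.8, 6.3, 5.9, 150, 149))

# Cochran's Q and I²: how much variation exceeds sampling error alone
w  <- 1 / res[, "var"]
mu <- sum(w * res[, "g"]) / sum(w)
q  <- sum(w * (res[, "g"] - mu)^2)
i2 <- max(0, (q - (nrow(res) - 1)) / q) * 100
c(Q = q, I2_percent = i2)
```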


2020 ◽  
Author(s):  
Julia M. Haaf ◽  
Jeffrey N. Rouder

The most prominent goal of a meta-analysis is to estimate the true effect size across a set of studies. This approach is problematic whenever the analyzed studies are inconsistent, i.e., some studies show an effect in the predicted direction while others show no effect and still others show an effect in the opposite direction. In the case of such inconsistency, the average effect may be the product of a mixture of mechanisms. The first question in any meta-analysis should therefore be whether all studies show an effect in the same direction. To tackle this question, a model with multiple ordinal constraints is proposed: one constraint for each study in the set. This "every study" model is compared to a set of alternative models, such as an unconstrained model that predicts effects in both directions. If the ordinal constraints hold, one underlying mechanism may suffice to explain the results from all studies. A major implication is that average effects then become interpretable. We illustrate the model-comparison approach using Carbajal et al.'s (2020) meta-analysis on the familiar-word-recognition effect, show how predictor analyses can be incorporated, and provide R code for interested researchers. As is common in meta-analysis, only surface statistics (such as effect size and sample size) are available from each study, and the modeling approach can be adapted to suit these conditions.
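A cheap empirical-Bayes stand-in for the question the paper asks, sketched in R: shrink each study's estimate toward the pooled mean, then compute the (approximate, independence-based) probability that every true study effect is positive. The full approach compares Bayesian models under ordinal constraints; the inputs here are hypothetical.

```r
y  <- c(0.30, 0.15, 0.42, 0.05, 0.22)  # study effect estimates (hypothetical)
se <- c(0.10, 0.08, 0.15, 0.09, 0.12)  # their standard errors

# Empirical-Bayes hyperparameters via DerSimonian-Laird moments
w    <- 1 / se^2
mu_f <- sum(w * y) / sum(w)
q    <- sum(w * (y - mu_f)^2)
tau2 <- max(0, (q - (length(y) - 1)) / (sum(w) - sum(w^2) / sum(w)))

# Approximate posterior of each study's true effect: shrunken mean and variance
shrink <- tau2 / (tau2 + se^2)
post_m <- shrink * y + (1 - shrink) * mu_f
post_v <- shrink * se^2

# Probability that every true study effect is positive (independence approx.)
prod(pnorm(0, post_m, sqrt(post_v), lower.tail = FALSE))
```

If this probability is high, a single mechanism producing a directionally consistent effect is plausible, which is the condition under which the average effect is interpretable.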


Metabolites ◽  
2020 ◽  
Vol 10 (8) ◽  
pp. 319 ◽  
Author(s):  
Christopher E. Gillies ◽  
Theodore S. Jennaro ◽  
Michael A. Puskarich ◽  
Ruchi Sharma ◽  
Kevin R. Ward ◽  
...  

To ensure the scientific reproducibility of metabolomics data, alternative statistical methods are needed. A paradigm shift away from the p-value, toward an embrace of uncertainty and interval estimation of a metabolite's true effect size, may lead to improved study design and greater reproducibility. Multilevel Bayesian models are one such approach, offering the added opportunity of incorporating the uncertainty of imputed values when data are missing. We designed simulations of metabolomics data to compare multilevel Bayesian models to standard logistic regression with corrections for multiple hypothesis testing. Our simulations varied the sample size and the fraction of metabolites truly different between two outcome groups. We then introduced missingness to further assess model performance. Across simulations, the multilevel Bayesian approach estimated the effect sizes of metabolites that differed between groups more accurately. Bayesian models also had greater power and a lower false discovery rate. With increasing missing data, Bayesian models were able to accurately impute the true concentrations, and incorporating the uncertainty of these estimates improved overall prediction. In summary, our simulations demonstrate that a multilevel Bayesian approach accurately quantifies the estimated effect size of metabolite predictors in regression modeling, particularly in the presence of missing data.
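A small R simulation of the core intuition, under assumed parameter values: when many metabolite effects are estimated at once, hierarchical shrinkage (the simplest relative of the paper's multilevel Bayesian models) yields lower estimation error than unpooled per-metabolite estimates.

```r
set.seed(4)
n_metab <- 200
true_b  <- rnorm(n_metab, 0, 0.3)       # true per-metabolite effects (assumed)
se      <- runif(n_metab, 0.15, 0.45)   # per-metabolite standard errors
est     <- rnorm(n_metab, true_b, se)   # unpooled estimates, one per metabolite

# Empirical-Bayes shrinkage toward the grand mean: partial pooling trades a
# little bias for a large variance reduction across the whole panel
tau2   <- max(0, var(est) - mean(se^2))  # moment estimate of true-effect variance
shrunk <- mean(est) + (est - mean(est)) * tau2 / (tau2 + se^2)

c(mse_unpooled = mean((est - true_b)^2),
  mse_shrunk   = mean((shrunk - true_b)^2))  # shrinkage wins on average
```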

