high statistical power
Recently Published Documents


TOTAL DOCUMENTS: 26 (five years: 15)
H-INDEX: 9 (five years: 1)

2021
Author(s): Sarah Rennie, Daniel Heidar Magnusson, Robin Andersson

RNA editing by ADAR (adenosine deaminase acting on RNA) is gaining increased interest in the field of post-transcriptional regulation. Fused to an RNA-binding protein (RBP) of interest, the catalytic activity of ADAR produces A-to-I RNA edits whose identification reveals the RNA transcripts bound by that RBP. However, the computational tools available for identifying such edits and for the statistical analysis of differential RNA editing are limited or too specialised for general-purpose usage. Here we present hyperTRIBER, a flexible suite of tools, wrapped in a convenient R package, for the detection of differential RNA editing. hyperTRIBER is applicable to complex scenarios and experimental designs, and provides a robust statistical framework that controls for read coverage at a given base, total expression level, and other covariates. We demonstrate the capabilities of our approach on HyperTRIBE RNA-seq data for the detection of RNAs bound by the N6-methyladenosine (m6A) reader protein ECT2 in Arabidopsis roots. We show that hyperTRIBER finds edits with high statistical power, even where editing proportions and transcript expression levels are low, demonstrating its usability and versatility for analysing differential RNA editing.
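hyperTRIBER's model is richer than a single-site contingency test (it handles covariates and complex designs within one framework), but the core per-site question can be sketched as a comparison of edit counts against read coverage across conditions. A minimal sketch with invented counts, not the package's actual method:

```python
# Bare-bones per-site test of differential editing: edited vs. unedited
# reads, ADAR-fusion vs. control. Not hyperTRIBER's actual model.
from scipy.stats import fisher_exact

def test_site(edits_fusion, cov_fusion, edits_ctrl, cov_ctrl):
    """2x2 test of editing proportion between the two conditions."""
    table = [[edits_fusion, cov_fusion - edits_fusion],
             [edits_ctrl,   cov_ctrl   - edits_ctrl]]
    return fisher_exact(table, alternative="greater")

# Invented counts for one candidate site.
odds, p = test_site(edits_fusion=9, cov_fusion=120, edits_ctrl=1, cov_ctrl=110)
print(f"odds ratio = {odds:.2f}, p = {p:.4f}")
```

A per-site test like this loses power exactly where the paper claims gains, at low coverage and low editing proportions, which is why the package's joint framework with covariates matters.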


2021
Author(s): Frank Zenker, Erich H. Witte

The development of an empirically adequate theoretical construct for a given phenomenon of interest requires an estimate of the population effect size, also known as the true effect. Arriving at this estimate in evidence-based ways presupposes access to robust experimental or observational findings, defined as statistically significant test results with high statistical power. In the behavioral sciences, however, even the best journals typically publish statistically significant test results with insufficient statistical power, entailing that such findings have a low replication probability. Whereas a robust finding formally requires that an empirical study engage with point-specific H0 and H1 hypotheses, behavioral scientists today typically point-specify only the H0 and instead engage a composite (directional) H1. This mismatch leaves the prospects for theory construction poor, because the population effect size, the very parameter that is to be modelled, regularly remains unknown, and this can only hinder the development of empirically adequate theoretical constructs. Based on the research program strategy (RPS), a sophisticated integration of frequentist and Bayesian elements of statistical inference, we claim here that theoretical progress requires engaging with point H1 hypotheses by default.
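A point-specified H1 makes power planning concrete: once the hypothesized effect size is fixed, the sample size needed for a given power follows directly. A minimal sketch, assuming a two-sample t-test and an illustrative point H1 of d = 0.4:

```python
# Sample size required to test a point H1 (d = 0.4) with high power.
# Illustrative values only; statsmodels supplies the power solver.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.4,   # point-specified H1: Cohen's d
    alpha=0.05,        # significance level for the point H0
    power=0.95,        # desired statistical power
    alternative="two-sided",
)
print(f"Required n per group: {n_per_group:.0f}")
```

With only a composite H1 ("d > 0"), no such calculation is possible, which is the mismatch the authors criticize.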


2021
Author(s): Yangqing Deng, Wei Pan

Discovering causal relationships between pairs of exposures and outcomes by using genetic variants as instrumental variables (IVs) is of great interest and potential, because it addresses hidden confounding in observational studies. The two most popular approaches are Mendelian randomization (MR), which usually uses independent genetic variants/SNPs from across the genome as IVs, and transcriptome-wide association studies (TWAS), which use cis-SNPs local to a gene. In spite of their many promising applications, both approaches face a major challenge: the validity of their causal conclusions depends on three critical assumptions about valid IVs, which may not hold in practice. The most likely and most challenging situation is widespread horizontal pleiotropy, which violates two of the three IV assumptions and thus biases statistical inference. Although some methods have been proposed as being robust, to various degrees, to the violation of some modeling assumptions, they often give different and even conflicting results, owing to their own modeling assumptions and possibly lower statistical efficiency, making it difficult for practitioners to choose among and interpret the varying results across methods. Hence, it would help to test directly whether any assumption is violated. In particular, there is a lack of such tests for TWAS. We propose a new and general goodness-of-fit (GOF) test, called TEDE (TEsting Direct Effects), applicable to both correlated and independent SNPs/IVs (as commonly used in TWAS and MR, respectively). Through simulation studies and real-data examples, we demonstrate the high statistical power and other advantages of our new method, while confirming the frequent violation of modeling (including IV) assumptions in practice and thus the importance of model checking by applying such a test in MR/TWAS analysis.
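The direct effects that TEDE tests for are exactly what horizontal pleiotropy introduces. A toy simulation of why this matters, using a single-SNP Wald ratio (not the TEDE statistic; all numbers are illustrative):

```python
# Toy simulation: horizontal pleiotropy biases the single-SNP Wald ratio.
# This illustrates the problem TEDE tests for; it is not the TEDE method.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
g = rng.binomial(2, 0.3, n)                 # SNP genotype (the IV)
u = rng.normal(size=n)                      # hidden confounder
x = 0.5 * g + u + rng.normal(size=n)        # exposure
alpha = 0.2                                 # direct (pleiotropic) SNP effect
y = 0.3 * x + alpha * g + u + rng.normal(size=n)  # outcome

wald = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]
print(f"Wald ratio estimate: {wald:.3f} (true causal effect is 0.3)")
# With alpha = 0.2, the estimate inflates to about 0.3 + alpha/0.5 = 0.7.
```

Setting `alpha = 0` recovers the true effect, which is why directly testing for nonzero direct effects amounts to checking the IV assumptions.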


2021
Author(s): Guangyi Gao, Byron J Gajewski, Jo Wick, Jonathan Beall, Jeffrey L Saver, ...

Background: Platform trials are well known for their ability to investigate multiple arms in heterogeneous patient populations and for their flexibility to add or drop treatment arms based on efficacy or lack thereof. Because of their complexity, it is important to develop highly optimized, transparent, and rigorous designs that are cost-efficient, offer high statistical power, maximize patient benefit, and are robust to changes over time.

Methods: To address these needs, we present a Bayesian platform trial design based on a Beta-Binomial model for binary outcomes that uses three key strategies: (1) hierarchical modelling of subgroups within treatment arms, which allows borrowing of information across subgroups; (2) response-adaptive randomization (RAR) schemes that seek a tradeoff between statistical power and patient benefit; and (3) adjustment for potential drift over time. Motivated by a proposed clinical trial that aims to find the appropriate treatment for different subgroup populations of ischemic stroke patients, extensive simulation studies were performed to validate the approach, compare different allocation rules, and study the model's operating characteristics.

Results & Conclusions: Our proposed approach achieved high statistical power and good patient benefit, and was also robust against population drift over time. Our design strikes a balance between the strengths of the traditional RAR scheme and fixed 1:1 allocation, and may be a promising choice for dichotomous-outcome trials investigating multiple subgroups.
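The Beta-Binomial model makes the RAR update cheap: each arm's posterior is available in closed form, and allocation probabilities follow from posterior simulation. A minimal sketch with invented counts and a simple tempering exponent (the actual design adds hierarchical borrowing across subgroups and drift adjustment):

```python
# Sketch of Beta-Binomial response-adaptive randomization. Illustrative
# only; the trial's hierarchical model and tuning are more involved.
import numpy as np

rng = np.random.default_rng(1)
successes = np.array([12, 18, 15])   # responders per arm (toy data)
failures  = np.array([28, 22, 25])   # non-responders per arm (toy data)

# Beta(1, 1) prior -> closed-form Beta posterior per arm.
post_a, post_b = 1 + successes, 1 + failures

# Estimate P(arm is best) by posterior simulation.
draws = rng.beta(post_a, post_b, size=(10_000, 3))
p_best = np.bincount(draws.argmax(axis=1), minlength=3) / draws.shape[0]

# RAR: allocate proportionally to a tempered P(best); c in (0, 1] trades
# patient benefit (c = 1) against statistical power (c -> 0 is ~equal).
c = 0.5
alloc = p_best**c / np.sum(p_best**c)
print("P(best):", p_best.round(3), "allocation:", alloc.round(3))
```

The tempering exponent is one concrete knob for the power-versus-patient-benefit tradeoff the abstract describes: c = 0 recovers fixed equal allocation, c = 1 a fully greedy RAR.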


2020
Author(s): Rungrawee Mongkolrob, Phuntila Tharabenjasin, Aporn Bualuang, Noel Pabalan

The genetics of cancer metastasis is important for designing optimal therapeutic strategies. The lysyl oxidase (LOX) gene has been found to be important in the metastatic process, with roles in setting the microenvironment for future metastatic sites. Associations between the LOX polymorphisms (473G/A and -22G/C) and cancer have been examined in several studies; however, the results were inconsistent, prompting a meta-analysis in order to obtain more precise estimates.

Searches of six databases yielded 14 articles (15 studies) that examined associations of 473G/A and -22G/C with cancer. We examined five cancer groups: breast, lung, bone (osteosarcoma), GIC (gastrointestinal cancers), and GYC (gynecological cancers). For each cancer group, we calculated pooled odds ratios (ORs) and 95% confidence intervals (CIs) using standard genetic models. High significance (Pa < 0.00001), homogeneity (I² = 0%), and high precision of effects (CI difference [upper CI minus lower CI] < 1.0) comprised the three criteria for strength of evidence (SOE). Multiple comparisons were Bonferroni-corrected. Sensitivity analysis assessed the robustness of the outcomes.

Thirteen significant associations indicating increased risk (OR > 1.00) were found in all cancer groups except breast (Pa = 0.10-0.91). Of the 13, two were in osteosarcoma, where the -22G/C effects (ORs 4.05-4.07, 95% CIs 1.30-12.70, Pa = 0.02) were homogeneous (I² = 0%) but imprecise (CIDs 11.4) and did not survive the Bonferroni correction. In contrast, the Bonferroni-surviving dominant/codominant outcomes in lung cancer (OR 1.44, 95% CI 1.19-1.74) and GYC (ORs 1.52-1.62, 95% CIs 1.26-1.88) met all three SOE criteria (Pa = 0.00001, I² = 0%, CIDs 0.49-0.56).

In summary, associations of LOX 473G/A with lung, ovarian, and cervical cancers indicate 1.4-1.6-fold increased risks. These outcomes were underpinned by robustness and high statistical power at the aggregate level.
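The machinery behind such pooled estimates is standard inverse-variance weighting. A minimal fixed-effect sketch with made-up study values (the meta-analysis itself also involves genetic-model contrasts and sensitivity analyses):

```python
# Fixed-effect (inverse-variance) pooling of odds ratios with an I^2
# heterogeneity estimate. Study values below are invented for illustration.
import numpy as np
from scipy import stats

log_or = np.log(np.array([1.4, 1.6, 1.5]))   # per-study ORs (toy)
se     = np.array([0.12, 0.15, 0.10])        # standard errors of log(OR)

w = 1 / se**2                                # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

# Cochran's Q and I^2 for heterogeneity.
q = np.sum(w * (log_or - pooled) ** 2)
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

ci = np.exp(pooled + np.array([-1, 1]) * 1.96 * pooled_se)
p = 2 * stats.norm.sf(abs(pooled / pooled_se))
print(f"OR {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
      f"p = {p:.2g}, I^2 = {i2:.0f}%")
```

Note how the pooled CI narrows as studies accumulate: that is the "high statistical power at the aggregate level" the summary refers to, and the CI difference used as the precision criterion is directly readable from the output.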


2020
Author(s): John Protzko, Jon Krosnick, Leif D. Nelson, Brian A. Nosek, Jordan Axt, ...

Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of optimal methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using current optimal practices: high statistical power, preregistration, and complete methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, the replication attempts here produced the expected effects with significance testing (p < .05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect size and sample size. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% that of the original study. This high replication rate justifies confidence in rigor-enhancing methods and suggests that past failures to replicate may be attributable to departures from optimal procedures.
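The benchmark of "maximum expected replicability" is essentially a power calculation: if the original effect size is accurate, the chance that a same-design replication reaches p < .05 equals the replication's power. A minimal sketch with illustrative numbers:

```python
# Expected replication rate if the original effect size is accurate,
# i.e., the power of a same-design replication (illustrative numbers).
from statsmodels.stats.power import TTestIndPower

d_observed = 0.5      # effect size from the original study (assumed)
n_per_group = 100     # per-group sample size of the replication (assumed)

power = TTestIndPower().power(
    effect_size=d_observed, nobs1=n_per_group, alpha=0.05
)
print(f"Expected replication probability: {power:.0%}")
```

An observed replication rate at or slightly above this ceiling, as reported here, is the strongest outcome a replication project can show.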


Author(s): Medhat Mahmoud, Ngoc-Thuy Ha, Henner Simianer, Timothy Beissinger

Identifying selection on polygenic complex traits in crops and livestock is key to understanding evolution and helps prioritize important characteristics for breeding. However, the QTL that contribute to polygenic trait variation exhibit small or infinitesimal effects, which hinders their detection because enormously high statistical power would be needed. Recently, we circumvented this challenge by introducing a method to identify selection on complex traits by evaluating the relationship between genome-wide changes in allele frequency and estimates of effect size. The method involves calculating a composite statistic across all markers that captures this relationship, followed by a linkage disequilibrium-aware permutation test to evaluate whether the observed pattern differs from that expected due to drift during evolution and population stratification. In this manuscript, we describe "Ghat", an R package developed to implement this test for selection on polygenic traits. We demonstrate the package by applying it to test for polygenic selection on 15 published European winter wheat traits, including yield, biomass, quality, morphological characteristics, and disease resistance. The results highlight the power of Ghat to identify selection on complex traits. The Ghat package is available on CRAN, the Comprehensive R Archive Network, and on GitHub.
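The composite statistic is, at heart, a sum over markers of allele-frequency change times estimated effect; selection shows up as an excess of this sum relative to a permutation null. A naive sketch with simulated values (the Ghat package's permutation is LD-aware, which this free permutation ignores):

```python
# Toy version of the test idea behind Ghat: a composite statistic linking
# allele-frequency change to marker effects, with a naive permutation null.
import numpy as np

rng = np.random.default_rng(2)
m = 5_000
effects = rng.normal(0, 0.05, m)            # estimated marker effects (toy)
# Simulate frequency change correlated with effects (i.e., selection).
dfreq = 0.3 * effects + rng.normal(0, 0.02, m)

ghat = np.sum(dfreq * effects)              # composite statistic

# Permutation null: break the effect/frequency-change pairing.
null = np.array([np.sum(rng.permutation(dfreq) * effects)
                 for _ in range(1_000)])
p = (1 + np.sum(np.abs(null) >= abs(ghat))) / (1 + len(null))
print(f"composite statistic = {ghat:.4f}, permutation p = {p:.3f}")
```

The key point is that each marker's contribution can be tiny while the sum across thousands of markers is highly significant, which is how the method sidesteps the per-QTL power problem.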


2020
Author(s): Rosalie Bruel, Easton R. White

Environmental monitoring is a key component of understanding and managing ecosystems. Given that most monitoring efforts are still expensive and time-consuming, it is essential that monitoring programs be designed to be efficient and effective. In many situations, the expensive part of monitoring is not sample collection but sample processing, which leads to only a subset of the samples being processed. For example, sediment or ice cores can be obtained quickly in the field, but they require weeks or months of processing in a laboratory setting. Standard sub-sampling approaches often involve equally spaced sampling. We use simulations to show how many samples, and which sampling approaches, are the most effective in detecting ecosystem change. We test these ideas with a case study of a Cladocera community assemblage reconstructed from a sediment core. We demonstrate that standard approaches to sample processing are less efficient than an iterative approach. For our case study, using an optimal sampling approach would have saved 195 person-hours, or thousands of dollars in labor costs. We also show that, compared with these standard approaches, fewer samples are typically needed to achieve high statistical power. We explain how our approach can be applied to monitoring programs that rely on video records, eDNA, remote sensing, and other common tools that allow re-sampling.
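The simulation logic can be sketched compactly: subsample a record at a given intensity, test for change, and repeat to estimate detection power. A toy linear-trend version with equally spaced subsamples (the paper's iterative scheme and community data are more involved):

```python
# How detection power scales with the number of processed samples from a
# longer core/record. Toy linear-trend example, not the paper's model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def detection_rate(n_processed, n_total=200, trend=0.01, sd=1.0, reps=500):
    """Fraction of simulated records where an equally spaced subsample
    of n_processed points detects a linear trend at p < 0.05."""
    idx = np.linspace(0, n_total - 1, n_processed).astype(int)
    hits = 0
    for _ in range(reps):
        series = trend * np.arange(n_total) + rng.normal(0, sd, n_total)
        res = stats.linregress(idx, series[idx])
        hits += res.pvalue < 0.05
    return hits / reps

for n in (10, 20, 40, 80):
    print(f"{n} samples processed -> detection rate {detection_rate(n):.2f}")
```

Curves like this typically plateau well below full processing, which is the basis for the claim that fewer samples can suffice for high statistical power.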


Author(s): Fidel Alfaro-Almagro, Paul McCarthy, Soroosh Afyouni, Jesper L. R. Andersson, Matteo Bastiani, ...

Dealing with confounds is an essential step in large cohort studies to address problems such as unexplained variance and spurious correlations. UK Biobank is a powerful resource for studying associations between imaging and non-imaging measures such as lifestyle factors and health outcomes, in part because of its large subject numbers. However, the resulting high statistical power also raises sensitivity to confound effects, which therefore have to be considered carefully. In this work we describe a set of possible confounds (including non-linear effects and interactions) that researchers may wish to consider for studies using such data. We describe how these confounds can be estimated, study the extent to which each of them affects the data, and examine the spurious correlations that may arise if they are not controlled for. Finally, we discuss several issues that future studies should consider when dealing with confounds.
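The standard remedy is confound regression: residualize both variables against the confound design matrix (including any non-linear terms) before estimating their association. A minimal sketch with a simulated age-driven spurious correlation (illustrative only; the paper's confound set is far larger):

```python
# Minimal confound regression: project confounds (with a nonlinear term)
# out of both variables before correlating them.
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
age = rng.uniform(45, 80, n)
conf = np.column_stack([np.ones(n), age, age**2])   # confound design matrix

# Both measures are driven by age only, so their raw correlation is spurious.
imaging   =  0.05 * age + rng.normal(size=n)
lifestyle = -0.03 * age + rng.normal(size=n)

def deconfound(y, X):
    """Residualize y against the confound matrix X via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_raw = np.corrcoef(imaging, lifestyle)[0, 1]
r_adj = np.corrcoef(deconfound(imaging, conf),
                    deconfound(lifestyle, conf))[0, 1]
print(f"raw r = {r_raw:.3f}, deconfounded r = {r_adj:.3f}")
```

At UK Biobank sample sizes, even the small raw correlation here would be highly significant, which is exactly the sensitivity-to-confounds problem the abstract highlights.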


This chapter presents the projected percentage reduction in recidivism for each social service or intervention. Meta-analysis is used to determine these projections. However, in recent years meta-analytic methods have been questioned. The most formidable criticism of meta-analysis is found in a master study of the statistical power of studies in criminology. This master study (covering over 8,000 studies) found that about 25% of studies in criminology exhibited high statistical power (in the 0.99 to 1.00 range). However, it also found that about 25% of studies had power between 0.01 and 0.24. These findings suggest that roughly a fourth of all studies in criminology have levels of statistical power that make it nearly impossible to identify the effects they are estimating. In other words, this chapter questions whether we should be confident in the recidivism-reduction projections for various interventions.
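To see what power in the 0.01-0.24 range means in practice, consider the chance of detecting a modest intervention effect at typical sample sizes. A minimal sketch, assuming a two-group comparison and an illustrative effect of d = 0.2:

```python
# Probability of detecting a modest intervention effect (d = 0.2) at
# various per-group sample sizes. Illustrative numbers only.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for n in (25, 100, 400, 1600):
    p = solver.power(effect_size=0.2, nobs1=n, alpha=0.05)
    print(f"n = {n:>4} per group -> power = {p:.2f}")
```

Studies at the small end of this range detect a true modest effect only about one time in ten, so pooling them can propagate noise into the recidivism-reduction projections.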

