Mapping pleiotropic loci using a fast-sequential testing algorithm

Author(s):  
Fernando M. Aguate ◽  
Ana I. Vazquez ◽  
Tony R. Merriman ◽  
Gustavo de los Campos

Pleiotropy (i.e., genes with effects on multiple traits) leads to genetic correlations between traits and contributes to the development of many syndromes. Identifying variants with pleiotropic effects on multiple health-related traits can improve the biological understanding of gene action and disease etiology, and can help to advance disease-risk prediction. Sequential testing is a powerful approach for mapping genes with pleiotropic effects. However, the existing methods and the available software do not scale to analyses involving millions of SNPs and large datasets. This has limited the adoption of sequential testing for pleiotropy mapping at large scale. In this study, we present a sequential test and software that can be used to test pleiotropy in large systems of traits with biobank-sized data. Using simulations, we show that the methods implemented in the software are powerful and have adequate type-I error rate control. To demonstrate the use of the methods and software, we present a whole-genome scan in search of loci with pleiotropic effects on seven traits related to metabolic syndrome (MetS) using UK-Biobank data (n~300 K distantly related white European participants). We found abundant pleiotropy and report 170, 44, and 18 genomic regions harboring SNPs with pleiotropic effects in at least two, three, and four of the seven traits, respectively. We validate our results using previous studies documented in the GWAS Catalog and using data from GTEx. Our results confirm previously reported loci and lead to several novel discoveries that link MetS-related traits through plausible biological pathways.
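The abstract does not spell out the test itself, but the flavor of a sequential pleiotropy test can be conveyed with a partial-conjunction (Bonferroni-style) combination of per-trait p-values: the null "this SNP affects fewer than k traits" is rejected when the k-th smallest p-value is small enough. The Python sketch below is an illustrative stand-in, not the authors' algorithm; the z-scores and trait count are hypothetical.

```python
import numpy as np
from scipy import stats

def partial_conjunction_p(z_scores, k):
    """Bonferroni-style partial-conjunction p-value (Benjamini & Heller, 2008)
    for H0: 'the variant affects fewer than k of the m traits'."""
    p = 2.0 * stats.norm.sf(np.abs(np.asarray(z_scores, dtype=float)))  # two-sided per-trait p-values
    m = p.size
    p_k = np.sort(p)[k - 1]              # k-th smallest per-trait p-value
    return min(1.0, (m - k + 1) * p_k)   # valid (conservative) combined p-value

# Hypothetical per-trait z-scores for one SNP across seven MetS traits
z = [5.1, 4.4, 0.8, 3.9, 1.2, 0.3, 2.0]
for k in (2, 3, 4):
    print(f"H0: fewer than {k} traits affected -> p = {partial_conjunction_p(z, k):.2e}")
```

Testing k = 2, 3, 4 in sequence mirrors the "at least two, three, and four traits" tallies reported above.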

2019 ◽  
Author(s):  
Alvin Vista

Cheating detection is an important issue in standardized testing, especially in large-scale settings. Statistical approaches are often computationally intensive and require specialised software. We present a two-stage approach that quickly filters suspected groups using statistical testing on an IRT-based answer-copying index. We also present an approach to mitigate data contamination and improve the performance of the index. The computation of the index was implemented through a modified version of an open-source R package, thus enabling wider access to the method. Using data from PIRLS 2011 (N=64,232), we conduct a simulation to demonstrate our approach. Type I error was well controlled and no control group was falsely flagged for cheating, while 16 (combined n=12,569) of the 18 (combined n=14,149) simulated groups were detected. Implications for system-level cheating detection and further improvements of the approach are discussed.
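As a rough illustration of the kind of IRT-based answer-copying index the first stage screens on (the study uses a modified open-source R package; this Python sketch with a Rasch model and hypothetical inputs is only a simplified analogue):

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch model probability of a correct response at ability theta,
    for items with difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def copy_index_z(resp_s, resp_c, theta_s, theta_c, b):
    """Toy answer-copying statistic: observed response matches between a
    suspected copier s and a source c, minus the matches expected under
    independent responding, standardized to a z-score."""
    ps, pc = rasch_p(theta_s, b), rasch_p(theta_c, b)
    e_match = ps * pc + (1 - ps) * (1 - pc)          # per-item match probability
    obs = (np.asarray(resp_s) == np.asarray(resp_c)).astype(float)
    return (obs.sum() - e_match.sum()) / np.sqrt((e_match * (1 - e_match)).sum())
```

A two-stage screen in this spirit would flag groups whose average index exceeds a lenient stage-1 cutoff and subject only those groups to the full, multiplicity-controlled test.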


PLoS Genetics ◽  
2021 ◽  
Vol 17 (11) ◽  
pp. e1009922
Author(s):  
Zhaotong Lin ◽  
Yangqing Deng ◽  
Wei Pan

With the increasing availability of large-scale GWAS summary data on various traits, Mendelian randomization (MR) has become commonly used to infer causality between a pair of traits, an exposure and an outcome. It depends on using genetic variants, typically SNPs, as instrumental variables (IVs). The inverse-variance weighted (IVW) method (with a fixed-effect meta-analysis model) is most powerful when all IVs are valid; however, when horizontal pleiotropy is present, it may lead to biased inference. On the other hand, Egger regression is one of the most widely used methods robust to (uncorrelated) pleiotropy, but it suffers from loss of power. We propose a two-component mixture of regressions to combine, and thus take advantage of, both IVW and Egger regression; it is often both more efficient (i.e., more powerful) and more robust to pleiotropy (i.e., better at controlling type I error) than either IVW or Egger regression alone, because its two components account for valid and invalid IVs, respectively. We propose a model averaging approach and a novel data perturbation scheme to account for uncertainties in model/IV selection, leading to more robust statistical inference for finite samples. Through extensive simulations and applications to the GWAS summary data of 48 risk factor-disease pairs and 63 genetically uncorrelated trait pairs, we showcase that our proposed methods often control type I error better while achieving much higher power than IVW and Egger regression (and sometimes than several other new/popular MR methods). We expect that our proposed methods will be a useful addition to the toolbox of Mendelian randomization for causal inference.
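For readers unfamiliar with the two components being mixed, here is a minimal sketch of fixed-effect IVW and MR-Egger estimates computed from per-SNP summary statistics (variable names are hypothetical; the paper's mixture model and data-perturbation scheme are not reproduced here):

```python
import numpy as np

def ivw_and_egger(beta_x, beta_y, se_y):
    """Fixed-effect IVW and MR-Egger estimates from per-SNP GWAS summary
    statistics: beta_x are SNP-exposure effects, beta_y and se_y the
    SNP-outcome effects and their standard errors."""
    beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
    w = 1.0 / se_y**2
    # IVW: inverse-variance weighted regression of beta_y on beta_x through the origin
    ivw = np.sum(w * beta_x * beta_y) / np.sum(w * beta_x**2)
    # MR-Egger: same weighted regression but with an intercept absorbing
    # directional (uncorrelated) horizontal pleiotropy
    X = np.column_stack([np.ones_like(beta_x), beta_x])
    intercept, egger = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * beta_y))
    return ivw, egger, intercept
```

As we read the abstract, the mixture model lets each IV contribute to an IVW-like (valid) or Egger-like (invalid) component rather than committing all IVs to a single regression.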


2015 ◽  
Author(s):  
Daria Zhernakova ◽  
Patrick Deelen ◽  
Martijn Vermaat ◽  
Maarten van Iterson ◽  
Michiel van Galen ◽  
...  

Genetic risk factors often localize in non-coding regions of the genome with unknown effects on disease etiology. Expression quantitative trait loci (eQTLs) help to explain the regulatory mechanisms underlying the association of genetic risk factors with disease. More mechanistic insights can be derived from knowledge of the context, such as cell type or the activity of signaling pathways, influencing the nature and strength of eQTLs. Here, we generated peripheral blood RNA-seq data from 2,116 unrelated Dutch individuals and systematically identified these context-dependent eQTLs using a hypothesis-free strategy that does not require prior knowledge of the identity of the modifiers. Of the 23,060 significant cis-regulated genes (false discovery rate ≤ 0.05), 2,743 genes (12%) show context-dependent eQTL effects. The majority of those were influenced by cell type composition, revealing eQTLs that are particularly strong in cell types such as CD4+ T-cells, erythrocytes, and even low-abundance eosinophils. A set of 145 cis-eQTLs were influenced by the activity of the type I interferon signaling pathway, and we identified several cis-eQTLs that are modulated by specific transcription factors that bind to the eQTL SNPs. This demonstrates that large-scale eQTL studies in unchallenged individuals can complement perturbation experiments to gain better insight into regulatory networks and their stimuli.
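The core statistical object here is a genotype-by-context interaction. A minimal sketch, assuming statsmodels and a single context proxy such as an estimated cell-type proportion (the study's hypothesis-free procedure searches over many candidate modifiers):

```python
import numpy as np
import statsmodels.api as sm

def context_eqtl_test(expr, geno, proxy):
    """Fit expr ~ SNP + context + SNP:context and return the interaction
    effect and its p-value; a significant interaction marks a
    context-dependent eQTL."""
    geno, proxy = np.asarray(geno, dtype=float), np.asarray(proxy, dtype=float)
    X = sm.add_constant(np.column_stack([geno, proxy, geno * proxy]))
    fit = sm.OLS(np.asarray(expr, dtype=float), X).fit()
    return fit.params[3], fit.pvalues[3]  # interaction term is column 3
```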


Biostatistics ◽  
2019 ◽  
Author(s):  
Lu Wang ◽  
Ying Huang ◽  
Ziding Feng

Candidate biomarkers discovered in the laboratory need to be rigorously validated before advancing to clinical application. However, it is often expensive and time-consuming to collect the high-quality specimens needed for validation; moreover, such specimens are often limited in volume. The Early Detection Research Network has developed valuable specimen reference sets that can be used by multiple labs for biomarker validation. To optimize the chance of successful validation, it is critical to efficiently utilize the limited specimens in these reference sets on promising candidate biomarkers. Towards this end, we propose a novel two-stage validation strategy that partitions the samples in the reference set into two groups for sequential validation. The proposed strategy adopts the group sequential testing method to control the type I error rate and rotates group membership to maximize the usage of available samples. We develop analytical formulas for the performance parameters of this strategy, namely the expected numbers of biomarkers that can be evaluated and of truly useful biomarkers that can be successfully validated, which can provide valuable guidance for future study design. The performance of our proposed strategy for validating biomarkers with respect to points on the receiver operating characteristic curve is evaluated via extensive simulation studies and compared with the default strategy of validating each biomarker using all samples in the reference set. Different types of early stopping rules and boundary shapes in the group sequential testing method are considered. Compared with the default strategy, our proposed strategy makes more efficient use of the limited resources in the reference set by allowing more candidate biomarkers to be evaluated, giving a better chance of having truly useful biomarkers successfully validated.
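To make the group sequential idea concrete, here is a two-look sketch with Pocock-style equal boundaries. The boundary value 2.178 for two looks at an overall one-sided alpha of 0.025 is a standard tabulated figure, but the paper considers several boundary shapes and stopping rules, so treat this as one illustrative choice:

```python
import numpy as np

POCOCK_2LOOK = 2.178  # standard two-look Pocock boundary; illustrative here

def two_stage_validation(z1, z2=None, c=POCOCK_2LOOK):
    """Two-look group sequential decision for one biomarker. z1 and z2 are
    stage-wise z-statistics (e.g., for an ROC-based performance contrast)
    computed on the two rotating halves of the reference set."""
    if z1 >= c:
        return "validated at stage 1 (remaining specimens freed up)"
    if z2 is None:
        return "continue to stage 2"
    z_pooled = (z1 + z2) / np.sqrt(2.0)  # assumes equal information increments
    return "validated" if z_pooled >= c else "not validated"

print(two_stage_validation(1.4, 1.9))  # hypothetical stage-wise z-statistics
```

Early stopping at stage 1 releases the remaining specimens for the next candidate biomarker, which is the source of the efficiency gain described above.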


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Guojun Hou ◽  
Isaac T. W. Harley ◽  
Xiaoming Lu ◽  
Tian Zhou ◽  
Ning Xu ◽  
...  

Since most variants that impact polygenic disease phenotypes localize to non-coding genomic regions, understanding the consequences of regulatory element variants will advance understanding of human disease mechanisms. Here, we identify the systemic lupus erythematosus (SLE) risk variant rs2431697 as likely causal for SLE through disruption of a regulatory element that modulates miR-146a expression. Using epigenomic analysis, genome editing and 3D chromatin structure analysis, we show that rs2431697 tags a cell-type-dependent distal enhancer specific for miR-146a that physically interacts with the miR-146a promoter. NF-κB binds the disease-protective allele in a sequence-specific manner, increasing expression of this immunoregulatory microRNA. Finally, CRISPR activation-based modulation of this enhancer in the PBMCs of SLE patients attenuates type I interferon pathway activation by increasing miR-146a expression. Our work provides a strategy to define non-coding RNA functional regulatory elements using disease-associated variants and provides mechanistic links between autoimmune disease risk genetic variation and disease etiology.


2021 ◽  
Author(s):  
Vasyl Zhabotynsky ◽  
Licai Huang ◽  
Paul Little ◽  
Yijuan Hu ◽  
Fernando F Pardo Manuel de Villena ◽  
...  

Using information from allele-specific gene expression (ASE) can substantially improve the power to map gene expression quantitative trait loci (eQTLs). However, such practice has been limited, partly due to high computational cost and the requirement to access raw data, which can take a large amount of storage space. To address these computational challenges, we have developed a computational framework that uses a statistical method named TReCASE as its computational engine and is computationally feasible for large-scale analysis. We applied it to map eQTLs in 28 human tissues using data from the Genotype-Tissue Expression (GTEx) project. Compared with a popular linear regression method that does not use ASE data, TReCASE can double the number of eGenes (i.e., genes with at least one significant eQTL) when the sample size is relatively small, e.g., n = 200. We also demonstrate how to use the ASE data that we have collected to study dynamic eQTLs whose effect sizes vary with respect to another variable, such as age. We find that the majority of such dynamic eQTLs are due to underlying latent factors, such as cell type proportions. We further compare TReCASE with another method, RASQUAL. TReCASE is ten times or more faster than RASQUAL and provides more robust type I error control.
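The power gain comes from the allele-specific signal: in heterozygous carriers of a cis-regulatory variant, reads should deviate from a 50/50 allelic split. A toy per-individual binomial test combined across individuals with Fisher's method conveys the idea; TReCASE itself jointly models total counts (negative binomial) and allelic counts (beta-binomial), and the read counts below are hypothetical:

```python
import numpy as np
from scipy import stats

def ase_imbalance_p(alt_reads, total_reads):
    """Toy allele-specific test: per heterozygous individual, a binomial
    test of the alternative-allele read fraction against 0.5, combined
    across individuals with Fisher's method."""
    p = [stats.binomtest(a, t, 0.5).pvalue for a, t in zip(alt_reads, total_reads)]
    chi2 = -2.0 * np.sum(np.log(np.maximum(p, 1e-300)))
    return stats.chi2.sf(chi2, 2 * len(p))

# Hypothetical read counts for five heterozygous carriers
print(ase_imbalance_p([30, 41, 25, 38, 33], [50, 60, 52, 55, 48]))
```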


2019 ◽  
Author(s):  
Zhongshang Yuan ◽  
Huanhuan Zhu ◽  
Ping Zeng ◽  
Sheng Yang ◽  
Shiquan Sun ◽  
...  

Integrating association results from both genome-wide association studies (GWASs) and expression quantitative trait locus (eQTL) mapping studies has the potential to shed light on the molecular mechanisms underlying disease etiology. Several statistical methods have recently been developed to integrate GWASs with eQTL studies in the form of transcriptome-wide association studies (TWASs). These existing methods can all be viewed as a form of two-sample Mendelian randomization (MR) analysis, which has been widely applied in various GWASs for inferring the causal relationship among complex traits. Unfortunately, most existing TWAS and MR methods make an unrealistic modeling assumption, namely that instrumental variables do not exhibit horizontal pleiotropic effects. However, horizontal pleiotropic effects have recently been discovered to be widespread across complex traits, and, as we show here, are also widespread across gene expression traits. Therefore, not allowing for horizontal pleiotropic effects can be overly restrictive and, as we also show here, can lead to a substantial inflation of test statistics and subsequently false discoveries in TWAS applications. Here, we present a probabilistic MR method, which we refer to as PMR-Egger, for testing and controlling for horizontal pleiotropic effects in TWAS applications. PMR-Egger relies on an MR likelihood framework that unifies many existing TWAS and MR methods, accommodates multiple correlated instruments, tests the causal effect of gene on trait in the presence of horizontal pleiotropy, and, with a newly developed parameter expansion version of the expectation maximization algorithm, is scalable to hundreds of thousands of individuals. With extensive simulations, we show that PMR-Egger provides calibrated type I error control for causal effect testing in the presence of horizontal pleiotropic effects, is reasonably robust to various types of horizontal pleiotropic effect mis-specification, is more powerful than existing MR approaches, and, as a by-product, can directly test for horizontal pleiotropy. We illustrate the benefits of PMR-Egger in applications to 39 diseases and complex traits obtained from three GWASs, including the UK Biobank. In these applications, we show how PMR-Egger can lead to new biological discoveries through integrative analysis.
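Schematically (as we read the abstract; the notation is ours), PMR-Egger couples a SNP-to-expression model with a trait model carrying a shared, burden-style horizontal-pleiotropy term:

```latex
\mathbf{x} = \mathbf{Z}\boldsymbol{\beta} + \boldsymbol{\varepsilon}_x,
\qquad
\mathbf{y} = \alpha\,\mathbf{Z}\boldsymbol{\beta} + \gamma\,\mathbf{Z}\mathbf{1} + \boldsymbol{\varepsilon}_y
```

Here x is gene expression, y the trait, Z the (possibly correlated) cis-SNP instruments, alpha the causal effect being tested, and gamma the Egger-intercept analogue; setting gamma to zero recovers a standard TWAS-style model, and testing gamma directly gives the by-product test for horizontal pleiotropy.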


2019 ◽  
Vol 16 (1) ◽  
Author(s):  
Chengyou Liu ◽  
Leilei Zhou ◽  
Yuhe Wang ◽  
Shuchang Tian ◽  
Junlin Zhu ◽  
...  

Variations in gene expression levels play an important role in tumors. There are numerous methods for identifying differentially expressed genes from high-throughput sequencing data, and several algorithms endeavor to identify distinctive genetic patterns susceptible to particular diseases. Although these approaches have proved successful, the estimated number of non-differentially expressed genes, as measured by the false discovery rate (FDR), has a large standard deviation, and the misidentification rate (type I error) grows rapidly as the number of genes to be tested becomes larger. In this study we developed a new method, Unit Gamma Measurement (UGM), which models the distribution of the multiple-hypothesis test statistics and thereby reduces the dependency problem. Simulated expression profiles and breast cancer RNA-Seq data were used to assess the accuracy of UGM. The results show that the number of non-differentially expressed genes identified by UGM is very close to that in the reference data, and that UGM also has a smaller standard error, range, interquartile range and RMS error. In addition, UGM screens many breast cancer-associated genes, such as BRCA1, BRCA2, PTEN and BRIP1, and provides better accuracy, robustness and efficiency than existing methods for identifying differentially expressed genes in high-throughput sequencing.
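The quantity at stake, the number of non-differentially expressed genes, is commonly estimated from the p-value distribution; Storey's lambda estimator is a standard baseline and makes the target concrete (this is not the UGM statistic, whose details the abstract does not give):

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey's estimator of pi0, the proportion of non-differentially
    expressed genes: p-values above lambda are assumed to come almost
    entirely from null (non-DE) genes."""
    pvals = np.asarray(pvals, dtype=float)
    pi0 = min(1.0, np.mean(pvals > lam) / (1.0 - lam))
    return pi0, int(round(pi0 * pvals.size))  # proportion and count of non-DE genes
```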


2016 ◽  
Vol 2 (2) ◽  
pp. 290
Author(s):  
Ying Jin ◽  
Hershel Eason

The effects of mean ability difference (MAD) and short tests on the performance of various DIF methods have been studied extensively in previous simulation studies. Their effects, however, have not been studied under multilevel data structure. MAD was frequently observed in large-scale cross-country comparison studies where the primary sampling units were more likely to be clusters (e.g., schools). With short tests, regular DIF methods under MAD-present conditions might suffer from inflated type I error rate due to low reliability of test scores, which would adversely impact the matching ability of the covariate (i.e., the total score) in DIF analysis. The current study compared the performance of three DIF methods: logistic regression (LR), hierarchical logistic regression (HLR) taking multilevel structure into account, and hierarchical logistic regression with latent covariate (HLR-LC) taking multilevel structure into account as well as accounting for low reliability and MAD. The results indicated that HLR-LC outperformed both LR and HLR under most simulated conditions, especially under the MAD-present conditions when tests were short. Practical implications of the implementation of HLR-LC were also discussed.
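For reference, the baseline LR method reduces to a likelihood-ratio comparison of nested logistic models. A minimal sketch assuming statsmodels (the HLR and HLR-LC variants add school-level random effects and a latent matching covariate, omitted here):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def lr_uniform_dif_p(item, total, group):
    """Likelihood-ratio test for uniform DIF: compare a logistic model of
    the item response on the matching total score against one that adds
    group membership."""
    item, total, group = map(np.asarray, (item, total, group))
    ll0 = sm.Logit(item, sm.add_constant(total)).fit(disp=0).llf
    ll1 = sm.Logit(item, sm.add_constant(np.column_stack([total, group]))).fit(disp=0).llf
    return stats.chi2.sf(2.0 * (ll1 - ll0), 1)  # chi-square with 1 df
```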


2019 ◽  
Vol 45 (3) ◽  
pp. 251-273 ◽  
Author(s):  
Carmen Köhler ◽  
Alexander Robitzsch ◽  
Johannes Hartig

Testing whether items fit the assumptions of an item response theory model is an important step in evaluating a test. In the literature, numerous item fit statistics exist, many of which show severe limitations. The current study investigates the root mean squared deviation (RMSD) item fit statistic, which is used for evaluating item fit in various large-scale assessment studies. The three research questions of this study are (1) whether the empirical RMSD is an unbiased estimator of the population RMSD; (2) if this is not the case, whether this bias can be corrected; and (3) whether the test statistic provides an adequate significance test to detect misfitting items. Using simulation studies, it was found that the empirical RMSD is not an unbiased estimator of the population RMSD, and nonparametric bootstrapping falls short of entirely eliminating this bias. Using parametric bootstrapping, however, the RMSD can be used as a test statistic that outperforms the other approaches (infit, outfit, and S − X²) with respect to both Type I error rate and power. The empirical application showed that parametric bootstrapping of the RMSD results in rather conservative item fit decisions, which suggests more lenient cut-off criteria.
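The parametric bootstrap behind research question (3) follows the generic recipe: simulate response data from the fitted IRT model, recompute the RMSD each time, and locate the observed value in the simulated null distribution. A minimal sketch with a user-supplied (hypothetical) simulator:

```python
import numpy as np

def parametric_bootstrap_p(rmsd_obs, simulate_rmsd, n_boot=1000, seed=0):
    """Generic parametric bootstrap p-value: `simulate_rmsd(rng)` is a
    user-supplied (hypothetical) function that draws responses from the
    fitted IRT model, refits, and returns one RMSD value for the item."""
    rng = np.random.default_rng(seed)
    null = np.array([simulate_rmsd(rng) for _ in range(n_boot)])
    # add-one correction keeps the p-value strictly positive
    return (1 + np.sum(null >= rmsd_obs)) / (n_boot + 1)
```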

