Haplotype-based analysis of selective sweeps in sheep

Genome ◽ 2014 ◽ Vol 57 (8) ◽ pp. 433-437
Author(s): James W. Kijas

Domestic animals represent an extremely useful model for linking genotypic and phenotypic variation. One approach involves identifying allele frequency differences between populations, using FST, to detect selective sweeps. While simple to calculate, FST may generate false positives due to aspects of population history. This prompted the development of hapFLK, a metric that measures haplotype differentiation while accounting for the genetic relationships between populations. The focus of this paper was to apply hapFLK to sheep with available SNP50 genotypes. The hapFLK approach identified a known selective sweep on chromosome 10 with high precision. Further, five regions were identified centered on genes with strong evidence for positive selection (COL1A2, NCAPG, LCORL, and RXFP2). Estimation of global FST revealed many more genomic regions, providing empirical support for published simulation-based results concerning the elevated type I error associated with FST when it is used to characterize sweep regions. The findings, although derived from sheep SNP data, are likely to apply across domestic animal species that have undergone artificial selection for desirable phenotypic traits.
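
For readers unfamiliar with the single-SNP statistic being compared against, the following is a minimal sketch, not the hapFLK method itself (hapFLK additionally models haplotype clusters and the kinship among populations). It computes a Hudson-style per-SNP FST estimate from allele frequencies; the frequencies and sample sizes are made-up toy values.

```python
import numpy as np

def fst_per_snp(p1, p2, n1, n2):
    """Hudson-style per-SNP FST estimator from allele frequencies
    p1, p2 (arrays) in two populations with sample sizes n1, n2."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

# Toy example: 5 SNPs scored in two sheep populations; the third and
# fifth SNPs are strongly differentiated and yield the largest FST
p1 = np.array([0.10, 0.50, 0.90, 0.30, 0.05])
p2 = np.array([0.12, 0.48, 0.20, 0.35, 0.80])
print(fst_per_snp(p1, p2, n1=50, n2=50))
```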

2018 ◽ Vol 93 (5) ◽ pp. 223-244
Author(s): Ryan D. Guggenmos, M. David Piercey, Christopher P. Agoglia

Abstract. Contrast analysis has become prevalent in experimental accounting research since Buckless and Ravenscroft (1990) introduced it to the accounting literature over 25 years ago. Since its introduction, the scope of contrast testing has expanded, yet guidance as to the most appropriate methods of specifying, conducting, interpreting, and exhibiting these tests has not kept pace. We survey the use of contrast analysis in the recent literature and propose a three-part testing approach that provides a more comprehensive picture of contrast results. Our approach considers three pieces of complementary evidence: visual evaluation of fit, traditional significance testing, and quantitative evaluation of the contrast variance residual. Our measure of the contrast variance residual, q², is proposed for the first time in this work. After proposing our approach, we walk through six common contrast testing scenarios in which current practices may fall short and our approach may guide researchers. We extend Buckless and Ravenscroft (1990) and contribute to the accounting research methods literature by documenting current contrast analysis practices that result in elevated Type I error and by proposing a potential solution to mitigate these concerns.
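
The traditional significance-testing leg of such an approach can be illustrated with a planned-contrast t-test on independent cell means. This is a generic sketch with invented data, not the authors' procedure; in particular, their q² contrast variance residual is defined in the paper itself and is not reproduced here.

```python
import numpy as np
from scipy import stats

def contrast_test(groups, weights):
    """t-test for a planned contrast across independent group means.
    groups: list of 1-D arrays of observations; weights must sum to zero."""
    weights = np.asarray(weights, dtype=float)
    assert abs(weights.sum()) < 1e-12, "contrast weights must sum to zero"
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    # Pooled error variance (MSE) from the one-way layout
    df_error = int(ns.sum() - len(groups))
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error
    psi = weights @ means                          # contrast estimate
    se = np.sqrt(mse * np.sum(weights ** 2 / ns))  # its standard error
    t = psi / se
    p = 2 * stats.t.sf(abs(t), df_error)
    return psi, t, p

# A 2x2 design flattened to 4 cells; the weights encode the prediction
# that only the fourth cell differs from the rest
rng = np.random.default_rng(1)
cells = [rng.normal(mu, 1.0, size=30) for mu in (0.0, 0.0, 0.0, 1.0)]
print(contrast_test(cells, [-1, -1, -1, 3]))
```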


2017
Author(s): Rasool Tahmasbi, Luke M. Evans, Eric Turkheimer, Matthew C. Keller

The environment can moderate the effect of genes, a phenomenon called gene-environment (GxE) interaction. Two broad types of GxE are modeled in human behavior: qualitative GxE, where the effects of individual genetic variants differ depending on some environmental moderator, and quantitative GxE, where the additive genetic variance changes as a function of an environmental moderator. Tests of both qualitative and quantitative GxE have traditionally relied on comparing the covariances between twins and close relatives, but recently there has been interest in testing such models on unrelated individuals with genome-wide SNP data. To date, however, it has not been possible to test quantitative GxE effects in unrelated individuals using genome-wide data because standard software cannot handle the required nonlinear constraints. Here, we introduce a maximum likelihood approach with parallel constrained optimization to fit such models. We use simulation to estimate the accuracy, power, and type I error rate of our method and to gauge its computational performance, and we then apply the method to IQ data measured on 40,172 individuals with whole-genome SNP data from the UK Biobank. We find that the additive genetic variation of IQ tagged by SNPs increases as socioeconomic status (SES) decreases, opposite to the direction found by several twin studies conducted in the U.S. on adolescents but consistent with several studies from Europe and Australia on adults.
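
The core estimation idea, a likelihood in which one variance component changes with an environmental moderator, can be sketched without the genomic relationship matrix the actual method requires. Everything below is an illustrative assumption, not the authors' model: a toy heteroscedastic normal fit in which squaring the variance function keeps it nonnegative, which is one way to sidestep an explicit nonlinear constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a phenotype whose "genetic" variance component shrinks as a
# moderator M (e.g., SES) rises, plus constant residual noise. The real
# method partitions variance via a genomic relationship matrix; that
# machinery is omitted here for brevity.
rng = np.random.default_rng(7)
n = 5_000
M = rng.normal(size=n)
sd_g = np.abs(1.0 - 0.4 * M)                 # moderated SD component
y = rng.normal(0.0, np.sqrt(sd_g**2 + 0.5**2))

def neg_loglik(theta):
    """Negative log-likelihood of y ~ N(0, (a0 + a1*M)^2 + se^2)."""
    a0, a1, log_se = theta
    var = (a0 + a1 * M) ** 2 + np.exp(log_se) ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + y**2 / var)

fit = minimize(neg_loglik, x0=[1.0, 0.0, np.log(0.5)], method="L-BFGS-B")
print(fit.x)  # recovers a0 ~ 1.0, a1 ~ -0.4 (up to a joint sign flip)
```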


Author(s): Damien R. Farine, Gerald G. Carter

Abstract. Generating insights about a null hypothesis requires not only a good dataset, but also statistical tests that are reliable and actually address the null hypothesis of interest. Recent studies have found that permutation tests, which are widely used to test hypotheses when working with animal social network data, can suffer from high rates of type I error (false positives) and type II error (false negatives).

Here, we first outline why pre-network and node permutation tests have elevated type I and II error rates. We then propose a new procedure, the double permutation test, that addresses some of the limitations of existing approaches by combining pre-network and node permutations. We conduct a range of simulations, allowing us to estimate error rates under different scenarios, including errors caused by confounding effects of social or non-social structure in the raw data.

We show that double permutation tests avoid elevated type I errors while remaining sufficiently sensitive to avoid elevated type II errors. By contrast, the existing solutions we tested, including node permutations, pre-network permutations, and regression models with control variables, all exhibit elevated errors under at least one set of simulated conditions. Type I error rates from double permutation tests remain close to 5% in the same scenarios where type I error rates from pre-network permutation tests exceed 30%.

The double permutation test provides a potential solution to issues arising from elevated type I and type II error rates when testing hypotheses with social network data. We also discuss other approaches, including restricted node permutations, testing multiple null hypotheses, and splitting large datasets to generate replicated networks, that can strengthen our ability to make robust inferences. Finally, we highlight ways that uncertainty can be explicitly considered during the analysis using permutation-based or Bayesian methods.
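
One ingredient of the proposed procedure, the node permutation, is easy to sketch. The other ingredient, a pre-network (data-stream) permutation, depends on how the raw observations were collected and is omitted, so this illustrates a component rather than the double permutation test itself. The network and trait below are simulated.

```python
import numpy as np

def node_permutation_test(adj, trait, n_perm=5_000, seed=0):
    """Node-label permutation test for the correlation between a node
    attribute and weighted degree (strength)."""
    rng = np.random.default_rng(seed)
    strength = adj.sum(axis=1)
    obs = np.corrcoef(strength, trait)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Shuffle trait values across nodes, keeping the network fixed
        null[i] = np.corrcoef(strength, rng.permutation(trait))[0, 1]
    # Two-sided p-value: how often a shuffled correlation is as extreme
    p = (1 + np.sum(np.abs(null) >= abs(obs))) / (n_perm + 1)
    return obs, p

rng = np.random.default_rng(42)
n = 40
adj = rng.random((n, n))
adj = np.triu(adj, 1); adj = adj + adj.T          # symmetric edge weights
trait = 0.5 * adj.sum(axis=1) + rng.normal(size=n)  # trait tied to strength
print(node_permutation_test(adj, trait))
```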


2015 ◽ Vol 03 (03) ◽ pp. 87-101
Author(s): Elif Tuğran, Mehmet Kocak, Hamit Mirtagioğlu, Soner Yiğit, Mehmet Mendes

2000 ◽ Vol 14 (1) ◽ pp. 1-10
Author(s): Joni Kettunen, Niklas Ravaja, Liisa Keltikangas-Järvinen

Abstract. We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied both to simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of detecting response coupling without increasing the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of smoothing methods for extracting information from psychophysiological time series are discussed.
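
The mechanism can be illustrated with a simple unweighted moving average applied to two noisy series that share a slow common component: smoothing suppresses channel-specific noise and raises the observed coupling. This sketch uses simulated data and only one of the smoother types the study compared; the window widths are arbitrary.

```python
import numpy as np

def moving_average(x, width):
    """Simple (unweighted) moving average smoother."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

# Two noisy channels sharing a slow common component, standing in for
# simulated IBI and EDA series
rng = np.random.default_rng(3)
t = np.arange(600)
common = np.sin(t / 40.0)
ibi = common + rng.normal(0, 1.0, t.size)
eda = common + rng.normal(0, 1.0, t.size)

# Progressive smoothing: the detected coupling r rises with window width
for width in (1, 5, 15, 31):
    r = np.corrcoef(moving_average(ibi, width), moving_average(eda, width))[0, 1]
    print(f"window={width:>2}  coupling r={r:.2f}")
```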


Methodology ◽ 2012 ◽ Vol 8 (1) ◽ pp. 23-38
Author(s): Manuel C. Voelkle, Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Oftentimes, researchers regard the LCM as a "new" method for analyzing change, with little attention paid to the fact that the technique was originally introduced as an "alternative to standard repeated measures ANOVA and first-order auto-regressive methods" (Meredith & Tisak, 1990, p. 107). In the first part of the paper, this close relationship is reviewed, and it is demonstrated how "traditional" methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique compared to "traditional" finite-sample approaches, the second part of the paper uses a Monte Carlo simulation to address the question of the degree to which the more flexible LCMs can actually replace some of the older tests. In addition, a structural equation modeling alternative to Mauchly's (1940) test of sphericity is explored. Although "traditional" methods may be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, no approach consistently outperformed the alternatives in terms of power and type I error, so the best method depends on the situation. We provide detailed recommendations on when to use which method.
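
The Monte Carlo logic of the second part, estimating empirical type I error rates under a null model, can be sketched for a one-way repeated measures ANOVA computed by hand. Fitting the corresponding LCMs requires SEM software and is not shown; the sample size, number of conditions, and compound-symmetric covariance below are arbitrary choices.

```python
import numpy as np
from scipy import stats

def rm_anova_p(x):
    """One-way repeated-measures ANOVA; x has shape (subjects, conditions)."""
    n, k = x.shape
    grand = x.mean()
    cond_means = x.mean(axis=0)
    subj_means = x.mean(axis=1)
    ss_cond = n * np.sum((cond_means - grand) ** 2)
    resid = x - cond_means - subj_means[:, None] + grand
    ss_err = np.sum(resid ** 2)
    f = (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))
    return stats.f.sf(f, k - 1, (n - 1) * (k - 1))

# Empirical type I error at alpha = .05 under the null, with
# compound-symmetric data (sphericity holds) and a small sample
rng = np.random.default_rng(11)
n, k, reps = 20, 4, 2_000
hits = 0
for _ in range(reps):
    subject = rng.normal(0, 1, size=(n, 1))      # random subject effect
    x = subject + rng.normal(0, 1, size=(n, k))  # no condition effect
    hits += rm_anova_p(x) < 0.05
print(f"empirical type I error: {hits / reps:.3f}")  # should be near .05
```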


Methodology ◽ 2015 ◽ Vol 11 (1) ◽ pp. 3-12
Author(s): Jochen Ranger, Jörg-Tobias Kuhn

In this manuscript, a new approach to the analysis of person fit is presented that is based on the information matrix test of White (1982). This test can be interpreted as a test of trait stability during the measurement situation. The test statistic approximately follows a χ²-distribution; in small samples, the approximation can be improved by a higher-order expansion. The performance of the test is explored in a simulation study, which suggests that the test adheres well to the nominal Type I error rate, although it tends to be conservative for very short scales. The power of the test is compared to that of four alternative tests of person fit; this comparison corroborates that the power of the information matrix test is similar to that of the alternatives. Advantages and areas of application of the information matrix test are discussed.
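
The generic information-matrix logic, that the Hessian and the outer product of the scores should cancel on average under a correctly specified model, can be sketched for a simple normal model rather than the IRT person-fit setting of the paper. The parametric bootstrap below is a simple calibration standing in for the χ² approximation and higher-order expansion the authors use; all of it is an illustrative assumption.

```python
import numpy as np

def im_discrepancy(x):
    """White-style information-matrix indicator for a fitted N(mu, v):
    mean over observations of (Hessian + score outer product), which
    should be near zero when the model is correctly specified."""
    mu, v = x.mean(), x.var()
    z = x - mu
    s_mu = z / v                            # score w.r.t. mu
    s_v = -0.5 / v + z**2 / (2 * v**2)      # score w.r.t. v
    h_mm = -1.0 / v                         # Hessian entries
    h_mv = -z / v**2
    h_vv = 0.5 / v**2 - z**2 / v**3
    d = np.array([
        (h_mm + s_mu * s_mu).mean(),
        (h_mv + s_mu * s_v).mean(),
        (h_vv + s_v * s_v).mean(),
    ])
    return np.sum(d ** 2)                   # scalar discrepancy statistic

def im_test(x, n_boot=1_000, seed=0):
    """Parametric-bootstrap p-value for the discrepancy statistic."""
    rng = np.random.default_rng(seed)
    obs = im_discrepancy(x)
    mu, sd = x.mean(), x.std()
    boot = [im_discrepancy(rng.normal(mu, sd, x.size)) for _ in range(n_boot)]
    return obs, (1 + np.sum(np.array(boot) >= obs)) / (n_boot + 1)

rng = np.random.default_rng(5)
print(im_test(rng.normal(0, 1, 300)))      # well specified: large p
print(im_test(rng.exponential(1.0, 300)))  # misspecified: small p
```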


2019 ◽ Vol 227 (4) ◽ pp. 261-279
Author(s): Frank Renkewitz, Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
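
As a flavor of this class of methods, here is a sketch of Egger's regression test, one widely used funnel-plot asymmetry test (whether it is among the six evaluated is not stated in this abstract). The simulated "literature" below retains only nominally significant studies, a crude stand-in for publication bias.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test: regress standardized effects (z = effect/SE)
    on precision (1/SE) and test whether the intercept departs from zero,
    which indicates funnel-plot asymmetry."""
    z = np.asarray(effects) / np.asarray(ses)
    prec = 1.0 / np.asarray(ses)
    X = np.column_stack([np.ones_like(prec), prec])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    n, p = X.shape
    resid = z - X @ coef
    sigma2 = resid @ resid / (n - p)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = coef[0] / np.sqrt(cov[0, 0])        # t-test on the intercept
    return t, 2 * stats.t.sf(abs(t), n - p)

# Simulated literature: only 'significant' studies (z > 1.96) survive,
# inducing the asymmetry the test is designed to detect
rng = np.random.default_rng(9)
ses = rng.uniform(0.05, 0.5, size=400)
effects = rng.normal(0.1, ses)               # true effect = 0.1
keep = effects / ses > 1.96                  # crude publication filter
print(egger_test(effects[keep], ses[keep]))  # small p expected
```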

