Linkage Disequilibrium in Domestic Sheep

Genetics ◽  
2002 ◽  
Vol 160 (3) ◽  
pp. 1113-1122
Author(s):  
A F McRae ◽  
J C McEwan ◽  
K G Dodds ◽  
T Wilson ◽  
A M Crawford ◽  
...  

Abstract: The last decade has seen a dramatic increase in the number of livestock QTL mapping studies. The next challenge awaiting livestock geneticists is to determine the actual genes responsible for variation in economically important traits. With the advent of high-density single nucleotide polymorphism (SNP) maps, it may be possible to fine-map genes by exploiting linkage disequilibrium between genes of interest and adjacent markers. However, the extent of linkage disequilibrium (LD) is generally unknown for livestock populations. In this article, microsatellite genotype data are used to assess the extent of LD in two populations of domestic sheep. High levels of LD were found to extend for tens of centimorgans and declined as a function of marker distance. However, LD was also frequently observed between unlinked markers. The prospects for LD mapping in livestock appear encouraging, provided that type I error can be minimized. Properties of the multiallelic LD coefficient D′ were also explored. D′ was found to be significantly related to marker heterozygosity, although the relationship did not appear to unduly influence the overall conclusions. Of potentially greater concern was the observation that D′ may be skewed when rare alleles are present. It is recommended that the statistical significance of LD be used in conjunction with coefficients such as D′ to determine the true extent of LD.
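For the biallelic case, the coefficients the abstract refers to can be sketched in a few lines (a minimal illustration, not the authors' code; the multiallelic D′ studied in the article is a frequency-weighted average of such pairwise values):

```python
def ld_coefficients(p_ab, p_a, p_b):
    """Pairwise LD between two biallelic loci, from the haplotype
    frequency p_ab and the allele frequencies p_a and p_b.

    Returns (D, D'): D = p_ab - p_a*p_b, and D' normalises D by its
    maximum attainable magnitude given the allele frequencies."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d, (abs(d) / d_max if d_max > 0 else 0.0)
```

For example, `ld_coefficients(0.3, 0.3, 0.4)` describes complete LD (the haplotype frequency is at its maximum, so D′ = 1), while `ld_coefficients(0.12, 0.3, 0.4)` describes linkage equilibrium (D = 0).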

Salud Mental ◽  
2021 ◽  
Vol 44 (6) ◽  
pp. 277-285
Author(s):  
Ricardo Ignacio Audiffred Jaramillo ◽  
Javier Eduardo García de Alba García ◽  
Ivonne García Monzón ◽  
Carlos Isaac Loza Salazar ◽  
Leticia Limón Cervantes

Introduction. Schizophrenia is a mental disorder that affects 21 million people worldwide, and it gives rise to environments with high Expressed Emotion (EE) in the families of these individuals. High EE is characterized by negative evaluations, criticism, and overinvolvement of family members towards the person with schizophrenia. Objective. To recognize the relationship between cultural agreement about the symptoms of schizophrenia and EE. Method. The study had a mixed design, using a cognitive anthropology method. The sample size was a probabilistic estimate based on Weller and Romney's proposal, with a competence higher than 50%, a confidence level of 95%, and a 5% type I error. The 40 participants, relatives of patients from the Instituto Jalisciense de Salud Mental (SALME), were selected by simple randomized probability sampling. Results. Forty-five percent of the sample showed high EE according to the Questionnaire for Measuring the Level of Expressed Emotion (CEEE). A single valid cultural model with statistical significance was found, in which violence was identified as the main symptom of schizophrenia. The best-informed relatives showed lower EE (Mann-Whitney U = 1,000, p < .001). Discussion and conclusion. Schizophrenia has been associated with stigmas that generate rejection and fear. A total of 40% of the world’s population believe people with schizophrenia are dangerous and violent. It seems opportune to reconsider the use of the term “schizophrenia”, which is etymologically imprecise and supports stigmas that have excluded and defamed people with schizophrenia for more than a century.


2021 ◽  
pp. 1-11
Author(s):  
Valentina Escott-Price ◽  
Karl Michael Schmidt

<b><i>Background:</i></b> Genome-wide association studies (GWAS) were successful in identifying SNPs showing association with disease, but their individual effect sizes are small and require large sample sizes to achieve statistical significance. Methods of post-GWAS analysis, including gene-based, gene-set and polygenic risk scores, combine the SNP effect sizes in an attempt to boost the power of the analyses. To avoid giving undue weight to SNPs in linkage disequilibrium (LD), the LD needs to be taken into account in these analyses. <b><i>Objectives:</i></b> We review methods that attempt to adjust the effect sizes (β-coefficients) of summary statistics, instead of simple LD pruning. <b><i>Methods:</i></b> We subject LD adjustment approaches to a mathematical analysis, recognising Tikhonov regularisation as a framework for comparison. <b><i>Results:</i></b> Observing the similarity of the processes involved with the more straightforward Tikhonov-regularised ordinary least squares estimate for multivariate regression coefficients, we note that current methods based on a Bayesian model for the effect sizes effectively provide an implicit choice of the regularisation parameter, which is convenient, but at the price of reduced transparency and, especially in smaller LD blocks, a risk of incomplete LD correction. <b><i>Conclusions:</i></b> There is no simple answer to the question of which method is best, but where interpretability of the LD adjustment is essential, as in research aiming at identifying the genomic aetiology of disorders, our study suggests that a more direct choice of mild regularisation in the correction of effect sizes may be preferable.
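The Tikhonov-regularised least-squares estimate that the review uses as its frame of comparison can be sketched as follows (a minimal stdlib illustration, not the reviewed methods' implementations; `ld` stands for the local LD correlation matrix R, and the regularisation parameter `lam` is the explicit choice the review argues for):

```python
def solve_linear(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def tikhonov_adjust(ld, beta_marginal, lam):
    """Adjusted (joint) effect sizes as the Tikhonov-regularised solution
    of (R + lam*I) beta_joint = beta_marginal, where R is the LD matrix
    of the block and beta_marginal are the GWAS summary coefficients."""
    n = len(beta_marginal)
    a = [[ld[i][j] + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve_linear(a, beta_marginal)
```

With `lam = 0` this reduces to the ordinary least-squares deconvolution of the marginal effects; increasing `lam` shrinks the adjusted coefficients, trading bias for stability in small or noisy LD blocks.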


2021 ◽  
pp. 121-142
Author(s):  
Charles Auerbach

This chapter covers tests of statistical significance that can be used to compare data across phases. These are used to determine whether observed outcomes are likely the result of an intervention or, more likely, the result of chance. The purpose of a statistical test is to determine how likely it is that the analyst is making an incorrect decision by rejecting the null hypothesis and accepting the alternative one. A number of tests of significance are presented in this chapter: statistical process control charts (SPCs), proportion/frequency, chi-square, the conservative dual criteria (CDC), robust conservative dual criteria (RCDC), the t test, and analysis of variance (ANOVA). How and when to use each of these is also discussed, along with methods for transforming autocorrelated data and for merging data sets. Once new data sets are created using the Append() function, they can be tested for Type I error using the techniques discussed in the chapter.
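The proportion/frequency idea behind between-phase tests can be sketched in a few lines (a simplified stdlib sketch, not the chapter's R-based procedure or its exact test): count how many intervention-phase points fall above the baseline median and ask how likely that count is under chance alone.

```python
from math import comb
from statistics import median

def proportion_phase_test(phase_a, phase_b, p=0.5):
    """Proportion/frequency-style check for single-case phase data:
    one-sided binomial probability of observing at least k
    intervention-phase (B) points above the baseline (A) median,
    if each point had only chance probability p of exceeding it."""
    med = median(phase_a)
    k = sum(1 for y in phase_b if y > med)
    n = len(phase_b)
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
```

For example, if every one of four intervention-phase points exceeds the baseline median, the chance probability is 0.5⁴ = 0.0625, just short of conventional significance; note this sketch, like the tests in the chapter, assumes the observations are not autocorrelated.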


2016 ◽  
Vol 5 (5) ◽  
pp. 16 ◽  
Author(s):  
Guolong Zhao

To evaluate a drug, statistical significance alone is insufficient; clinical significance is also necessary. This paper explains how to analyze clinical data while considering both statistical and clinical significance. The analysis is carried out by combining a confidence interval under the null hypothesis with one under a non-null hypothesis. The combination conveys one of four possible results: (i) both significant, (ii) only significant in the former, (iii) only significant in the latter, or (iv) neither significant. The four results constitute a quadripartite procedure. Corresponding tests are mentioned for describing Type I error rates and power. The empirical coverage is exhibited by Monte Carlo simulations. In superiority trials, the four results are interpreted as clinical superiority, statistical superiority, non-superiority, and indeterminate, respectively. The interpretation is reversed in inferiority trials. The combination yields a deflated Type I error rate, a decreased power, and an increased sample size. The four results may be helpful for a meticulous evaluation of drugs. Of these, non-superiority is another profile of equivalence, and so it can also be used to interpret equivalence. This approach may offer a convenient way to interpret discordant cases. Nevertheless, a larger data set is usually needed. An example is taken from a real trial in naturally acquired influenza.
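One plausible reading of the quadripartite classification for a superiority trial can be sketched as follows (an assumption-laden sketch, not the paper's exact construction of the two confidence intervals: here a single CI for the treatment effect is compared against both zero and a hypothetical clinical margin `margin`):

```python
def quadripartite(lower, upper, margin):
    """Classify a treatment-effect confidence interval (lower, upper)
    in a superiority trial, under the simplifying assumption that
    'statistically significant' means the CI excludes 0 in the
    favourable direction, and 'clinically significant' means it also
    clears the clinical margin."""
    sig_statistical = lower > 0        # significant vs H0: effect = 0
    sig_clinical = lower > margin      # significant vs H0: effect = margin
    if sig_statistical and sig_clinical:
        return "clinical superiority"
    if sig_statistical:
        return "statistical superiority"
    if upper < margin:                 # effect demonstrably below the margin
        return "non-superiority"
    return "indeterminate"
```

As in the paper, in inferiority trials the reading of the four outcomes would be reversed.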


2018 ◽  
Vol 5 (1) ◽  
pp. 205316801876487
Author(s):  
Lion Behrens ◽  
Ingo Rohlfing

Based on the statistical analysis of an original survey of young party members from six European democracies, a study concluded that three types of young members differed systematically regarding their membership objectives, activism, efficacy and perceptions of the party and self-perceived political future. We performed a technical replication of the original study, correcting four deficiencies, which led us to a different conclusion. First, we discuss substantive significance in addition to statistical significance. Second, we ran significance tests on all comparisons instead of limiting them to an arbitrary subset. Third, we performed pairwise comparisons between the three types of members instead of using pooled groups. Fourth, we avoided the inflation of the type-I error rate due to multiple testing by using the Bonferroni–Holm correction. We found that most of the differences between the types lacked substantive significance, and that statistical significance only coherently distinguished the types of members in their future membership, but not in their present behaviour and attitudes.
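The Bonferroni–Holm (step-down) correction used in the replication is a standard procedure and easy to sketch with the stdlib:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down procedure: test p-values from smallest to
    largest against alpha/(m), alpha/(m-1), ..., stopping at the first
    failure. Controls the family-wise (type I) error rate at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all larger p-values are also retained
    return reject
```

For example, with p-values [0.01, 0.04, 0.03, 0.005] and α = 0.05, only the first and last are rejected: 0.005 ≤ 0.05/4 and 0.01 ≤ 0.05/3, but 0.03 > 0.05/2, at which point the procedure stops.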


1996 ◽  
Vol 1 (1) ◽  
pp. 25-28 ◽  
Author(s):  
Martin A. Weinstock

Background: Accurate understanding of certain basic statistical terms and principles is key to critical appraisal of published literature. Objective: This review describes type I error, type II error, null hypothesis, p value, statistical significance, α, two-tailed and one-tailed tests, effect size, alternate hypothesis, statistical power, β, publication bias, confidence interval, standard error, and standard deviation, with examples from reports of dermatologic studies. Conclusion: The application of the results of published studies to individual patients should be informed by an understanding of certain basic statistical concepts.
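Several of the terms this review defines are connected by one simple calculation, sketched here for a two-sided z-test (an illustrative sketch only; the function names and setup are ours, not the review's):

```python
from statistics import NormalDist

def ztest_power(effect, se, alpha=0.05):
    """Power of a two-sided z-test when the true effect is `effect`
    with standard error `se`. alpha is the type I error rate; the
    type II error rate is beta = 1 - power. The negligible
    opposite-tail rejection probability is ignored."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    return 1 - NormalDist().cdf(z_crit - effect / se)
```

Two boundary cases make the definitions concrete: when the true effect is zero, the "power" is just half the type I error rate (each tail contributes α/2), and when the effect equals the critical value times the standard error, power is exactly 50%.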


Author(s):  
Abhaya Indrayan

Background: Small P-values have conventionally been considered evidence to reject a null hypothesis in empirical studies. However, P-values are now widely criticized, and the threshold we use for statistical significance is questioned. Methods: This communication presents a contrarian view and explains why the P-value and its threshold are still useful for ruling out sampling fluctuation as a source of the findings. Results: The problem is not with P-values themselves but with their misuse, abuse, and over-use, including the dominant role they have assumed in empirical results. False results arise mostly from errors in design, invalid data, inadequate analysis, inappropriate interpretation, accumulation of Type I error, and selective reporting, and not from P-values per se. Conclusion: A P-value threshold such as 0.05 for statistical significance is helpful in making a binary inference for practical application of the result. However, a lower threshold can be suggested to reduce the chance of false results. Also, the emphasis should be on detecting a medically significant effect, not just any nonzero effect.


2021 ◽  
pp. 096228022110028
Author(s):  
Zhen Meng ◽  
Qinglong Yang ◽  
Qizhai Li ◽  
Baoxue Zhang

For the nonparametric Behrens-Fisher problem, a directional-sum test is proposed based on a division-combination strategy. A one-layer wild bootstrap procedure is given to calculate its statistical significance. We conduct simulation studies with data generated from lognormal, t, and Laplace distributions to show that the proposed test controls the type I error rate properly and is more powerful than the existing rank-sum and maximum-type tests under most of the considered scenarios. An application to a dietary intervention trial further shows the performance of the proposed test.
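The general shape of a wild bootstrap significance calculation can be sketched as follows (a hedged stdlib sketch of the generic technique for a two-sample location statistic, not the paper's directional-sum statistic or its one-layer procedure): centre each sample, flip the residual signs with Rademacher weights, and count how often the resampled statistic is at least as extreme as the observed one.

```python
import random

def wild_bootstrap_p(x, y, n_boot=2000, seed=1):
    """Wild-bootstrap p-value for the mean difference between two
    samples that may have unequal variances (the Behrens-Fisher
    setting). Residuals are resampled with Rademacher (+/-1) signs,
    which preserves each sample's own variance."""
    rng = random.Random(seed)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    observed = mx - my
    rx = [v - mx for v in x]
    ry = [v - my for v in y]
    extreme = 0
    for _ in range(n_boot):
        bx = sum(r * rng.choice((-1.0, 1.0)) for r in rx) / len(x)
        by = sum(r * rng.choice((-1.0, 1.0)) for r in ry) / len(y)
        if abs(bx - by) >= abs(observed):
            extreme += 1
    return extreme / n_boot
```

Because the bootstrap world is generated under the null (both samples centred), the proportion of resamples at least as extreme as the observed difference estimates the p-value directly.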


2011 ◽  
Vol 24 (2) ◽  
pp. 91-124 ◽  
Author(s):  
Keiji Uchikawa ◽  
Takahiro Hoshino ◽  
Takehiro Nagai

Abstract: The t-test and the analysis of variance are commonly used as statistical significance testing methods. However, they cannot assess the significance of differences between thresholds within individual observers estimated from the constant stimuli method; these thresholds are not defined as averages of samples, but rather as functions of parameters of psychometric functions fitted to participants' responses. Moreover, the statistics necessary for these statistical testing methods cannot be derived. In this paper, we propose a new statistical testing method to assess the statistical significance of differences between thresholds estimated from the constant stimuli method. The new method can assess not only threshold differences but also main effects and interactions in multifactor experiments, exploiting the asymptotic normality of maximum likelihood estimators and the characteristics of multivariate normal distributions. The proposed method could also be used in similar cases for thresholds estimated from the adjustment method and the staircase method. Finally, we present simulation data on the assumptions, power, and type I error of the proposed method.
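The core ingredient, a Wald-type comparison justified by the asymptotic normality of maximum-likelihood estimators, can be sketched for the simplest two-threshold case (a minimal sketch, not the authors' multifactor method; `se1` and `se2` stand for standard errors obtained from the fitted psychometric functions, e.g. via the inverse Fisher information):

```python
from math import sqrt
from statistics import NormalDist

def threshold_wald_test(t1, se1, t2, se2):
    """Wald-type z-test for the difference between two threshold
    estimates t1 and t2. Asymptotic normality of the ML estimates
    makes (t1 - t2) approximately normal with variance se1^2 + se2^2
    when the two estimates are independent."""
    z = (t1 - t2) / sqrt(se1 ** 2 + se2 ** 2)
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p
```

The multifactor extension in the paper generalises this from a scalar difference to linear contrasts of a vector of thresholds with a multivariate normal approximation.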

