variance structure
Recently Published Documents

TOTAL DOCUMENTS: 51 (FIVE YEARS: 0)
H-INDEX: 11 (FIVE YEARS: 0)

2020 ◽ Vol 42 ◽ pp. e44456
Author(s): Isabella Marianne Costa Campos, Denismar Alves Nogueira, Eric Batista Ferreira, Davi Butturi-Gomes

Many studies, particularly those employing multivariate or modelling techniques, involve testing the variance structure, which underscores the importance of hypothesis tests on covariance structures. The purpose of this study was to perform a detailed evaluation of the power and type I error rate of several existing identity and sphericity tests, considering scenarios with different numbers of variables (2 to 64) and sample sizes (5 to 100). The proposal of Ledoit and Wolf (2002) was the most appropriate for testing the identity structure. For the sphericity test, the version of John (1972) as modified by Ledoit and Wolf (2002), followed by the proposal of Box (1949), performed best.
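To make the objects under comparison concrete, the following sketch implements John's (1972) sphericity statistic with its classical chi-square approximation and checks its empirical type I error by Monte Carlo. The Ledoit and Wolf (2002) version modifies this statistic for the high-dimensional regime; the function name, sample sizes, and all simulation settings below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def john_sphericity_test(X):
    """John's (1972) test of sphericity, H0: Sigma = sigma^2 * I.

    Uses the classical large-n approximation T = n*p*U/2 ~ chi2 with
    p(p+1)/2 - 1 degrees of freedom; Ledoit and Wolf (2002) modify
    this statistic for the high-dimensional regime."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)           # sample covariance matrix
    V = p * S / np.trace(S)               # scales out the unknown sigma^2
    D = V - np.eye(p)
    U = np.trace(D @ D) / p
    T = n * p * U / 2.0
    return T, stats.chi2.sf(T, p * (p + 1) // 2 - 1)

# Monte Carlo type I error under H0 (spherical normal data)
rng = np.random.default_rng(0)
reps, rejections = 500, 0
for _ in range(reps):
    _, pval = john_sphericity_test(rng.normal(size=(100, 4)))
    rejections += pval < 0.05
print(f"empirical type I error: {rejections / reps:.3f}")
```

Repeating this loop across grids of dimensions and sample sizes, and under non-spherical alternatives for power, is the kind of performance study the abstract describes.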



2020 ◽ Vol 2 (1)
Author(s): N S Vitek, C C Roseman, J I Bloch

Synopsis: Mammalian molar crowns form a module in which measurements of size for individual teeth within a tooth row covary with one another. Molar crown size covariation is proposed to fit the inhibitory cascade model (ICM) or its variant, the molar module component (MMC) model, but the inability of the former model to fit across biological scales is a concern in the few cases where it has been tested in primates. The ICM has thus far failed to explain patterns of intraspecific variation, an intermediate biological scale, even though it explains patterns at both the smaller organ level and the larger between-species scale. Studies of this topic in a much broader range of taxa are needed, but the properties of a sample appropriate for testing the ICM at the intraspecific level are unclear. Here, we assess intraspecific variation in relative molar sizes of the cotton mouse, Peromyscus gossypinus, to further test the ICM and to develop recommendations for appropriate sampling protocols in future intraspecific studies of molar size variation across Mammalia. To develop these recommendations, we model the sensitivity of estimates of molar ratios to sample size and simulate the use of composite molar rows when complete ones are unavailable. As in past studies on primates, our results show that the intraspecific variance structure of molar ratios within the rodent P. gossypinus does not meet the predictions of the ICM or MMC, and when we extend the analyses to include the MMC, neither model fits the observed patterns of variation better than the other. Standing variation in molar size ratios is relatively constant across mammalian samples containing all three molars. In future studies, analyzing average ratio values will require relatively small minimum sample sizes of two or more complete molar rows, and even composite-based estimates from four or more specimens per tooth position can accurately estimate mean molar ratios. Analyzing variance structure, however, will require relatively large sample sizes of at least 40–50 complete specimens, and composite molar rows cannot accurately reconstruct the variance structure of ratios in a sample. Based on these results, we propose guidelines for intraspecific studies of molar size covariation. In particular, the suitability of composite specimens for estimating mean molar ratios is promising for the inclusion of isolated molars and incomplete molar rows from the fossil record in future studies of the evolution of molar modules, as long as variance structure is not a key component of such studies.
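The sample-size sensitivity analysis described above can be sketched as a small simulation. Every parameter here (the lognormal model, the 1 : 0.8 : 0.6 mean ratio, the correlation) is a made-up illustration rather than the Peromyscus gossypinus estimates, but it shows why mean ratios stabilize at tiny sample sizes while variance estimates do not.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_molar_rows(n, rho=0.6):
    """Hypothetical correlated lognormal molar sizes (M1, M2, M3); the
    1 : 0.8 : 0.6 mean ratio and all parameters are illustrative, not
    estimates from the paper."""
    mean = np.log([1.0, 0.8, 0.6])
    cov = 0.05 * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))
    return np.exp(rng.multivariate_normal(mean, cov, size=n))

def m2_m1_ratio(rows):
    return rows[:, 1] / rows[:, 0]

# Monte Carlo: how noisy are the mean and the variance of the M2/M1
# ratio at different sample sizes?
spread = {}
for n in (2, 5, 40):
    means = [m2_m1_ratio(simulate_molar_rows(n)).mean() for _ in range(2000)]
    vars_ = [m2_m1_ratio(simulate_molar_rows(n)).var(ddof=1) for _ in range(2000)]
    spread[n] = (np.std(means), np.std(vars_))
    print(n, spread[n])
```

The spread of the mean-ratio estimate is modest even at n = 2, while the variance estimate remains far noisier until n is much larger, mirroring the paper's contrast between mean-oriented and variance-oriented sampling requirements.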



Heredity ◽ 2019 ◽ Vol 124 (2) ◽ pp. 274-287
Author(s): Emre Karaman, Mogens S. Lund, Guosheng Su

Abstract: Widely used genomic prediction models may not properly account for heterogeneous (co)variance structure across the genome. Models such as BayesA and BayesB assume locus-specific variance, which is highly influenced by the prior for the (co)variance of single nucleotide polymorphism (SNP) effects, regardless of the size of the data. Models such as BayesC or GBLUP assume a common (co)variance for a proportion (BayesC) or all (GBLUP) of the SNP effects. In this study, we propose a multi-trait Bayesian whole-genome regression method (BayesN0) that groups predefined sets of SNPs to account for heterogeneous (co)variance structure across the genome. This model was also implemented in single-step Bayesian regression (ssBayesN0). For practical implementation, we considered multi-trait single-step SNPBLUP models using (co)variance estimates from BayesN0 or ssBayesN0. Genotype data were simulated using haplotypes on the first five chromosomes of 2200 Danish Holstein cattle, and phenotypes were simulated for two traits with heritabilities of 0.1 or 0.4, assuming 200 quantitative trait loci (QTL). We compared prediction accuracy across prediction models and region sizes (one SNP, 100 SNPs, one chromosome, or the whole genome). In general, the highest accuracies were obtained when 100 adjacent SNPs were grouped together. The ssBayesN0 improved accuracies over BayesN0, and using (co)variance estimates from ssBayesN0 generally yielded higher accuracies than using estimates from BayesN0 for the 100-SNP region size. Our results suggest that a good strategy for routine genomic evaluation could be to estimate (co)variance components with ssBayesN0 and then use those estimates in genomic prediction with multi-trait single-step SNPBLUP.
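The grouping idea can be illustrated with a single-trait toy analogue: a SNPBLUP/ridge solution in which shrinkage is either common to all SNPs or specific to predefined 100-SNP regions. The region variances are taken as known here purely for illustration; this is not the BayesN0 estimation procedure itself, and all simulation settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_train, n_test, n_snp, region = 400, 200, 500, 100
X = rng.binomial(2, 0.3, size=(n_train + n_test, n_snp)).astype(float)
X -= X.mean(axis=0)                      # centered 0/1/2 genotype codes

# True SNP effects concentrated in the first region, so the effect
# variance is heterogeneous across the (toy) genome
beta = np.zeros(n_snp)
beta[:region] = rng.normal(0.0, 0.2, region)
g = X @ beta                             # true genetic values
y = g + rng.normal(0.0, np.sqrt(g.var() * 1.5), n_train + n_test)  # h2 = 0.4

def snpblup(X, y, lam):
    """Ridge/SNPBLUP solution with a per-SNP shrinkage vector lam."""
    return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

Xtr, ytr = X[:n_train], y[:n_train]
Xte, gte = X[n_train:], g[n_train:]
sig_e = g.var() * 1.5

# Region-specific effect variances (oracle values, for illustration only)
region_var = np.array([max(beta[i:i + region].var(), 1e-4)
                       for i in range(0, n_snp, region)])
lam_region = np.repeat(sig_e / region_var, region)
lam_common = np.full(n_snp, sig_e / beta.var())

acc_region = np.corrcoef(Xte @ snpblup(Xtr, ytr, lam_region), gte)[0, 1]
acc_common = np.corrcoef(Xte @ snpblup(Xtr, ytr, lam_common), gte)[0, 1]
print(acc_region, acc_common)
```

When effect variance truly differs between regions, region-specific shrinkage predicts the genetic values more accurately than a single genome-wide variance, which is the intuition behind grouping 100 adjacent SNPs.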



2019
Author(s): Sujit Sekhar Maharana

Given the growing importance of antagonism as the dark core, we examined facet-level associations between the antagonistic facets of deceitfulness, manipulation, and grandiosity and the dark triad. A sample of 270 prospective managers (M_age = 25.7 years, SD_age = 3.2 years) from a leading business school in India was selected for the study. It was hypothesised that the facets of antagonism share a common variance structure (hypothesis 1), that Machiavellianism is significantly explained by deceitfulness and manipulation (hypothesis 2), that psychopathy is significantly explained by deceitfulness and manipulation (hypothesis 3), and that narcissism is significantly explained by grandiosity and manipulation (hypothesis 4). All hypotheses were fully supported except hypothesis 1, which received partial support. It was concluded that while each antagonistic facet plays a unique role, individually the facets cannot account for the dark core.
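The regression structure behind hypotheses 2-4 can be sketched with ordinary least squares on simulated facet scores; the common-factor model, all coefficients, and the data are hypothetical and serve only to show the form of the analysis, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 270  # matches the reported sample size; the scores are simulated

# Hypothetical facet scores sharing variance through a common factor
latent = rng.normal(size=n)
deceit = 0.6 * latent + rng.normal(size=n)
manip = 0.6 * latent + rng.normal(size=n)

# Hypothesis 2 form: Machiavellianism ~ deceitfulness + manipulation
mach = 0.5 * deceit + 0.4 * manip + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), deceit, manip])
coef, *_ = np.linalg.lstsq(X, mach, rcond=None)
r2 = 1 - ((mach - X @ coef) ** 2).sum() / ((mach - mach.mean()) ** 2).sum()
print(coef[1:], r2)  # slopes near the simulated 0.5 and 0.4
```

Hypotheses 3 and 4 have the same form with psychopathy and narcissism as outcomes and the corresponding facet pairs as predictors.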



2019 ◽ Vol 16 (6) ◽ pp. 616-625
Author(s): Xiaodong Luo, Bo Huang, Hui Quan

Background/Aims: Restricted mean survival time has become a popular treatment-effect measure because of its straightforward interpretation. However, study designs based on restricted mean survival times often require extensive simulation because the variance structure is hard to obtain analytically. This article provides a flexible approach to study design and monitoring based on restricted mean survival times without resorting to simulation. Methods: We assume that both the event-time and censoring-time distributions are piecewise exponential and that the accrual distribution is piecewise uniform, under which the restricted mean survival times and their variance-covariance structure can be computed conveniently. Results: Since we allow an arbitrary number of pieces in the piecewise exponential and uniform distributions, the resulting model can handle a wide range of scenarios. The usefulness of the approach is demonstrated via an example. Conclusion: The proposed approach is flexible and useful in the design and monitoring of survival trials based on restricted mean survival times.
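Under a piecewise-exponential event-time model, the restricted mean survival time has a closed form of the kind the authors exploit: on each piece the survival curve is an exponential segment whose integral is available analytically. A minimal sketch, with illustrative breakpoints and hazards:

```python
import numpy as np

def rmst_piecewise_exp(breaks, hazards, tau):
    """Restricted mean survival time, the integral of S(t) on [0, tau],
    for a piecewise-exponential model: hazard hazards[k] applies on
    [breaks[k], breaks[k+1]), with the last piece extending to infinity."""
    t, surv, area = 0.0, 1.0, 0.0
    edges = list(breaks[1:]) + [np.inf]
    for lam, end in zip(hazards, edges):
        seg = min(end, tau) - t
        if seg <= 0:
            break
        area += surv * (1 - np.exp(-lam * seg)) / lam   # integral of S over piece
        surv *= np.exp(-lam * seg)
        t += seg
        if t >= tau:
            break
    return area

# Sanity check: a single piece reduces to the exponential closed form
lam, tau = 0.3, 5.0
print(rmst_piecewise_exp([0.0], [lam], tau), (1 - np.exp(-lam * tau)) / lam)
```

With arbitrary breakpoints, the same recursion yields RMST values (and, by extension, delta-method variance components) for a wide range of accrual and censoring scenarios without simulation.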



2019 ◽ Vol 43 (7) ◽ pp. 815-830
Author(s): Wei Q. Deng, Shihong Mao, Anette Kalnapenkis, Tõnu Esko, Reedik Mägi, ...


2019
Author(s): Kiyofumi Miyoshi, Hakwan Lau

Psychophysical studies of confidence construction are often grounded in bidimensional signal detection theory (SDT) and its relatives. However, these studies often rest on the oversimplified assumptions of (1) bidimensional variance equality and (2) bidimensional statistical independence. The present study simulated two-alternative forced-choice and confidence-rating performance, incorporating more empirically plausible variance-covariance structures. One prominent observation is that superior metacognitive accuracy can be achieved by applying a heuristic in which the response-incongruent dimension of information is ignored. This is because such a heuristic takes advantage of the specific unequal-variance structure, which, paradoxically, cannot be easily exploited if both dimensions are evaluated together. Furthermore, under a variety of internal statistical structures, this simple heuristic predicts dissociations between objective decisions and subjective metacognition that have been observed empirically. It also provides a tentative account of some behavioral features of blindsight. This surprisingly simple decision heuristic may therefore inspire novel perspectives on metacognition and consciousness.
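The heuristic can be illustrated with a small simulation in the spirit of the study; the parameter values (signal sd of 2 versus noise sd of 1, mean of 1) are assumptions for illustration, not the paper's settings. Confidence read from the response-congruent dimension alone is compared with a balance-of-evidence rule via the type 2 ROC area.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(4)
n, mu, sd_sig = 200_000, 1.0, 2.0   # unequal variance: signal sd > noise sd

stim = rng.integers(0, 2, n)                       # which channel holds the signal
x = rng.normal(0.0, 1.0, (n, 2))                   # noise evidence in both channels
x[np.arange(n), stim] = rng.normal(mu, sd_sig, n)  # overwrite with signal evidence

choice = x.argmax(axis=1)
correct = choice == stim
chosen = x[np.arange(n), choice]
unchosen = x[np.arange(n), 1 - choice]

conf_full = chosen - unchosen   # balance of evidence (both dimensions)
conf_heur = chosen              # response-congruent evidence only

def auc(conf, correct):
    """Type 2 ROC area: P(confidence on a correct trial > on an error)."""
    r = rankdata(conf)
    n1, n0 = correct.sum(), (~correct).sum()
    return (r[correct].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

print(auc(conf_heur, correct), auc(conf_full, correct))
```

Under this unequal-variance structure, the congruent-only confidence tracks accuracy better than the two-dimensional difference, matching the paper's central observation.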





2019
Author(s): Yun Zhang, Gautam Bandyopadhyay, David J. Topham, Ann R. Falsey, Xing Qiu

Abstract
Background: For many practical hypothesis testing (H-T) applications, the data are correlated and/or have a heterogeneous variance structure. The regression t-test for weighted linear mixed-effects regression (LMER) is a legitimate choice because it accounts for complex covariance structure; however, high computational costs and occasional convergence issues make it impractical for analyzing high-throughput data. In this paper, we propose computationally efficient parametric and semiparametric tests based on a set of specialized matrix techniques dubbed the PB-transformation. The PB-transformation has two advantages: (1) the PB-transformed data have a scalar variance-covariance matrix; (2) the original H-T problem is reduced to an equivalent one-sample H-T problem. The transformed problem can then be approached by either the one-sample Student's t-test or the Wilcoxon signed-rank test.
Results: In simulation studies, the proposed methods outperform commonly used alternatives under both normal and double exponential distributions. In particular, the PB-transformed t-test produces notably better results than the weighted LMER test, especially in the high-correlation case, using only a small fraction of the computational cost (3 versus 933 seconds). We apply these two methods to a set of RNA-seq gene expression data collected in a breast cancer study. Pathway analyses show that the PB-transformed t-test reveals more biologically relevant findings in relation to breast cancer than the weighted LMER test.
Conclusions: As fast and numerically stable replacements for the weighted LMER test, the PB-transformed tests are especially suitable for "messy" high-throughput data that include both independent and matched/repeated samples. With our method, practitioners no longer have to choose between using partial data (applying paired tests to only the matched samples) and ignoring the correlation in the data (applying two-sample tests to data with some correlated samples).
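The first advantage, reducing a correlated problem to one with a scalar covariance, can be illustrated with generic whitening. This is not the PB-transformation itself, just the underlying idea; the compound-symmetric covariance and all settings are assumed for illustration.

```python
import numpy as np
from scipy import stats, linalg

rng = np.random.default_rng(5)

# n correlated measurements (e.g. repeated/matched samples) with a
# compound-symmetric covariance, assumed known here for illustration
n, rho = 30, 0.5
Sigma = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)

P = linalg.inv(linalg.sqrtm(Sigma)).real   # whitening: P @ Sigma @ P.T = I

# Under H0 (zero mean), the transformed data are i.i.d. N(0, 1), so a
# one-sample t-test is valid despite the original correlation.
reps, rej = 400, 0
for _ in range(reps):
    y0 = rng.multivariate_normal(np.zeros(n), Sigma)
    _, p0 = stats.ttest_1samp(P @ y0, 0.0)
    rej += p0 < 0.05
print(f"type I error: {rej / reps:.3f}")
```

The transformed test keeps its nominal size without fitting a mixed model, which is the computational appeal of transformation-based approaches like the PB tests.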



2018
Author(s): Shaila Musharoff, Danny Park, Andy Dahl, Joshua Galanter, Xuanyao Liu, ...

Abstract: Identifying the genetic and environmental factors underlying phenotypic differences between populations is fundamental to multiple research communities. To date, studies have focused on the relationship between population and phenotypic mean. Here we consider the relationship between population and phenotypic variance, i.e., "population variance structure." In addition to gene-gene and gene-environment interaction, we show that population variance structure is a direct consequence of natural selection. We develop the ancestry double generalized linear model (ADGLM), a statistical framework to jointly model population mean and variance effects. We apply ADGLM to several deeply phenotyped datasets and observe ancestry-variance associations with 12 of 44 tested traits in ~113K British individuals and 3 of 14 tested traits in ~3K Mexican, Puerto Rican, and African-American individuals. We show through extensive simulations that population variance structure can both bias and reduce the power of genetic association studies, even when principal components or linear mixed models are used. ADGLM corrects this bias and improves power relative to previous methods in both simulated and real datasets. Additionally, ADGLM identifies 17 novel genotype-variance associations across six phenotypes.
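The joint mean-and-variance model can be sketched as a Gaussian double GLM with a linear mean and a log-linear variance, fitted by maximum likelihood. The covariate, effect sizes, and optimizer choice below are illustrative assumptions, not the ADGLM implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 5000
anc = rng.uniform(0, 1, n)            # simulated ancestry proportion
b0, b1, g0, g1 = 1.0, 0.5, -0.2, 0.8  # true mean and log-variance effects
y = b0 + b1 * anc + rng.normal(0, np.exp(0.5 * (g0 + g1 * anc)), n)

def negloglik(theta):
    """Gaussian negative log-likelihood with a linear mean model and a
    log-linear variance model (the double-GLM idea behind ADGLM)."""
    m = theta[0] + theta[1] * anc
    logv = theta[2] + theta[3] * anc
    return 0.5 * np.sum(logv + (y - m) ** 2 / np.exp(logv))

fit = minimize(negloglik, np.zeros(4), method="BFGS")
print(fit.x)  # approximately [1.0, 0.5, -0.2, 0.8]
```

A nonzero estimate for the log-variance coefficient (theta[3]) is the kind of ancestry-variance association the abstract reports; testing it against zero gives a variance-effect test alongside the usual mean-effect test.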


