Statistical Assumptions as Empirical Commitments

Author(s):  
Richard A. Berk ◽  
David A. Freedman
2000 ◽  
Author(s):  
Charles Annis

Abstract The idea of probabilistic engineering analysis — replacing fixed values for important parameters with their probability densities — seems almost self-evident. With the arrival of increasingly inexpensive computing capacity, the past decade has witnessed a great proliferation of such analyses. Regrettably, however, much of this work is unwittingly built on untenable statistical premises. Engineers, sure of their subsequent logic, can arrive at grossly erroneous conclusions, not always because their engineering is flawed, but because they relied on statistical assumptions that do not hold in their situation, or overlooked statistical results that do.
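As an illustration of the technique the abstract describes (not an example from the paper), the sketch below propagates assumed input distributions through a deterministic formula by Monte Carlo sampling. The stress formula, the normal distributions, and all numbers are hypothetical — and note that the choice of distribution is exactly the kind of statistical premise the authors warn must be justified, not assumed.

```python
import random

# Hypothetical example: axial stress in a rod, sigma = F / A.
# A deterministic analysis would plug in point values for F and A;
# the probabilistic version draws them from *assumed* distributions
# and propagates each draw through the same formula.

random.seed(0)

def stress_samples(n=100_000):
    samples = []
    for _ in range(n):
        F = random.gauss(1000.0, 50.0)    # load in N (assumed normal)
        A = random.gauss(0.01, 0.0005)    # area in m^2 (assumed normal)
        samples.append(F / A)             # stress in Pa
    return samples

s = stress_samples()
mean = sum(s) / len(s)
# Probability of exceeding a hypothetical allowable stress:
p_exceed = sum(x > 120_000.0 for x in s) / len(s)
```

If the true load distribution were heavy-tailed rather than normal, `p_exceed` could be badly underestimated — the "untenable statistical premise" failure mode in miniature.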


2021 ◽  
Vol 43 ◽  
pp. e53721
Author(s):  
Luiz Rafael Clóvis ◽  
Ronald José Barth Pinto ◽  
Renan Santos Uhdre ◽  
Jocimar Costa Rosa ◽  
Hugo Zeni Neto ◽  
...  

The objective of this study was to conduct a meta-analysis, test its efficiency in summarizing the heterogeneous data of heritability estimates for the traits of grain yield (GY) and popping expansion (PE), and provide reliable estimates of selection gains in popcorn. To this end, 97 heritability estimates for popcorn GY and PE, in the broad and narrow sense, were used. The main procedures underlying the estimation of the combined heritability by meta-analysis were: i) an exploratory analysis of the set of heritability estimates to detect outliers using a box-plot chart; ii) verification of the required statistical assumptions; iii) testing the heritability estimates for homogeneity; and iv) calculation of the combined heritability estimates. The meta-analysis facilitated the synthesis of the information pertaining to heritability in popcorn. The combined heritability estimates in the broad sense for GY and PE were 0.5208 ± 0.0229 and 0.6356 ± 0.0209, respectively, and in the narrow sense were 0.3290 ± 0.0292 and 0.3083 ± 0.0298, respectively.
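Steps iii) and iv) above are commonly implemented as Cochran's Q homogeneity test and a fixed-effect inverse-variance weighted mean. A minimal stdlib-only sketch of that standard recipe follows; the estimates and standard errors below are illustrative, not data from the study.

```python
import math

# Hypothetical heritability estimates and their standard errors.
estimates = [0.48, 0.55, 0.51, 0.60, 0.45]
ses       = [0.06, 0.05, 0.07, 0.04, 0.08]

# iv) fixed-effect combined estimate: inverse-variance weighted mean.
w = [1.0 / se**2 for se in ses]
h_bar = sum(wi * h for wi, h in zip(w, estimates)) / sum(w)
se_bar = math.sqrt(1.0 / sum(w))   # SE of the combined estimate

# iii) homogeneity: Cochran's Q, chi-square with k-1 df under
# the null hypothesis that all studies share one true value.
Q = sum(wi * (h - h_bar)**2 for wi, h in zip(w, estimates))
```

The combined standard error is necessarily smaller than any individual study's, which is why pooling 97 estimates can yield the tight ± values reported in the abstract.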


2019 ◽  
Author(s):  
Matt Williams

Most researchers and students in psychology learn of S. S. Stevens’ scales or “levels” of measurement (nominal, ordinal, interval, and ratio), and of his rules setting out which statistical analyses are inadmissible with each measurement level. Many are nevertheless left confused about the basis of these rules, and whether they should be rigidly followed. In this article, I attempt to provide an accessible explanation of the measurement-theoretic concerns that led Stevens to argue that certain types of analyses are inappropriate with particular levels of measurement. I explain how these measurement-theoretic concerns are distinct from the statistical assumptions underlying data analyses, which rarely include assumptions about levels of measurement. The level of measurement of observations can nevertheless have important implications for statistical assumptions. I conclude that researchers may find it more useful to critically investigate the plausibility of the statistical assumptions underlying analyses rather than limiting themselves to the set of analyses that Stevens believed to be admissible with data of a given level of measurement.


1974 ◽  
Vol 11 (11) ◽  
pp. 1616-1619 ◽  
Author(s):  
Erwin Zodrow

Given a matrix of product-moment correlation coefficients computed on 'closed data' (a system of percentage variables), the test of departure from zero correlation is clearly inappropriate because closure of the data imposes non-zero correlations. Subroutine CHAYES estimates 'null correlations' and calculates a t-matrix which may be used to test the hypothesis that observed product-moment correlations are due to the closure property. The calculated t-matrix is at best an approximation, owing both to certain mathematical assumptions underlying this t-test and to the recognition that the statistical assumptions in the theory of t-testing are not satisfied. Great care must be exercised in accepting or rejecting correlations by this test. Results of the t-test are therefore no substitute for sound geological reasoning: caveat emptor.
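The closure effect the abstract refers to is easy to demonstrate. The simulation below (illustrative only; it does not reproduce the CHAYES subroutine) generates three independent variables, closes them to percentages summing to 100, and shows that the closed parts are negatively correlated even though the raw variables are independent.

```python
import random

random.seed(1)

def pearson(x, y):
    # Product-moment correlation coefficient, stdlib-only.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Three independent positive components per sample...
n = 5000
raw = [[random.gauss(10.0, 2.0) for _ in range(3)] for _ in range(n)]

# ...closed to percentages (each row now sums to 100).
closed = [[100.0 * c / sum(row) for c in row] for row in raw]

x = [row[0] for row in closed]
y = [row[1] for row in closed]
r = pearson(x, y)   # negative, induced purely by closure
```

With three comparable components the closure-induced correlation sits near -1/(k-1) = -0.5, so testing `r` against zero would be exactly the inappropriate test the abstract criticizes.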


Author(s):  
Nicholas J. Ashill

Over the past 15 years, the use of Partial Least Squares (PLS) in academic research has enjoyed increasing popularity in many social sciences, including Information Systems, marketing, and organizational behavior. PLS can be considered an alternative to covariance-based SEM and offers greater flexibility in handling various modeling problems in situations where it is difficult to meet the hard assumptions of more traditional multivariate statistics. This chapter focuses on PLS for beginners. Several topics are covered, including foundational concepts in SEM, the statistical assumptions of PLS, a LISREL-PLS comparison, and reflective versus formative measurement.


Author(s):  
Vasileios Charisopoulos ◽  
Damek Davis ◽  
Mateo Díaz ◽  
Dmitriy Drusvyatskiy

Abstract We consider the task of recovering a pair of vectors from a set of rank one bilinear measurements, possibly corrupted by noise. Most notably, the problem of robust blind deconvolution can be modeled in this way. We consider a natural nonsmooth formulation of the rank one bilinear sensing problem and show that its moduli of weak convexity, sharpness and Lipschitz continuity are all dimension independent, under favorable statistical assumptions. This phenomenon persists even when up to half of the measurements are corrupted by noise. Consequently, standard algorithms, such as the subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within a constant relative error of the solution. We complete the paper with a new initialization strategy, complementing the local search algorithms. The initialization procedure is both provably efficient and robust to outlying measurements. Numerical experiments, on both simulated and real data, illustrate the developed theory and methods.
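A toy version of the setup can make the abstract concrete. The sketch below (a stdlib-only illustration under assumptions, not the authors' code) builds noiseless rank-one bilinear measurements y_i = ⟨a_i, u⟩⟨b_i, v⟩, forms the nonsmooth l1-type objective, and runs the subgradient method with the Polyak step size from an initialization within a constant relative error of the solution — the regime in which the abstract asserts rapid convergence. Dimensions, sample size, and iteration count are all arbitrary choices.

```python
import random

random.seed(0)
d, m = 5, 60

# Hypothetical ground-truth pair (u, v) and Gaussian sensing vectors.
u = [random.gauss(0, 1) for _ in range(d)]
v = [random.gauss(0, 1) for _ in range(d)]
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]
B = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

# Noiseless rank-one bilinear measurements.
y = [dot(a, u) * dot(b, v) for a, b in zip(A, B)]

def loss(x, z):
    # Nonsmooth formulation: average absolute residual.
    return sum(abs(dot(a, x) * dot(b, z) - yi)
               for a, b, yi in zip(A, B, y)) / m

# Initialize within a small relative error of (u, v).
x = [ui + 0.1 * random.gauss(0, 1) for ui in u]
z = [vi + 0.1 * random.gauss(0, 1) for vi in v]
init_loss = loss(x, z)

for _ in range(200):
    gx, gz = [0.0] * d, [0.0] * d
    for a, b, yi in zip(A, B, y):
        ax, bz = dot(a, x), dot(b, z)
        s = 1.0 if ax * bz > yi else -1.0    # subgradient of |.|
        for j in range(d):
            gx[j] += s * bz * a[j] / m
            gz[j] += s * ax * b[j] / m
    gnorm2 = dot(gx, gx) + dot(gz, gz)
    if gnorm2 < 1e-16:
        break
    # Polyak step: valid here because the minimum value is 0.
    step = loss(x, z) / gnorm2
    x = [xj - step * gj for xj, gj in zip(x, gx)]
    z = [zj - step * gj for zj, gj in zip(z, gz)]

final_loss = loss(x, z)
```

Note that (x, z) is identifiable only up to the scaling (cx, z/c), so the loss, rather than parameter error, is the natural convergence measure in this sketch.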

