Non-random errors and their importance in testing of hypotheses

2021 ◽  
Vol 66 (3) ◽  
pp. 7-21
Author(s):  
Mirosław Szreder

Increasing numbers of non-random errors are observed in contemporary sample surveys – in particular, errors resulting from non-response or faulty measurements (imprecise statistical observation). Until recently, the consequences of these kinds of errors were not widely discussed in the context of hypothesis testing. Researchers focused almost entirely on sampling errors (random errors), whose magnitude decreases as the size of the random sample grows. As a consequence, researchers who work with very large samples tend to overlook the influence that random and non-random errors have on the results of their study. The aim of this paper is to show how non-random errors can affect decisions made with the classical hypothesis testing procedure, with particular attention devoted to cases in which researchers work with large samples. The study supports the thesis that large samples make statistical tests more sensitive to non-random errors: systematic errors, as a special case of non-random errors, increase the probability of wrongly rejecting a true hypothesis as the sample size grows. Supplementing hypothesis testing with the analysis of confidence intervals may in this context provide substantive support for the researcher in drawing accurate inferences.
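A minimal simulation sketch may help illustrate the central point; it is not taken from the paper, and all numbers (population mean, standard deviation, bias, sample sizes, significance level) are illustrative assumptions. A fixed systematic measurement error makes a classical one-sample test increasingly likely to reject a true null hypothesis as the sample grows, while the accompanying confidence interval concentrates around the biased value and thereby reveals the problem.

```python
# Illustrative sketch only: a small systematic (non-random) error added to every
# observation inflates the rejection rate of a true H0 as the sample size grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

mu0, sigma = 100.0, 15.0   # true population mean and standard deviation (assumed)
bias = 1.0                 # small systematic measurement error added to every observation
alpha = 0.05
reps = 500

for n in (50, 500, 5000, 50000):
    rejections = 0
    for _ in range(reps):
        sample = rng.normal(mu0, sigma, size=n) + bias   # H0: mu = mu0 is true, data are biased
        t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)
        rejections += p_value < alpha
    # A 95% confidence interval for the mean concentrates around mu0 + bias,
    # not mu0, as n grows, which exposes the systematic error.
    ci = stats.t.interval(0.95, df=n - 1, loc=sample.mean(), scale=stats.sem(sample))
    print(f"n={n:6d}  rejection rate of true H0: {rejections / reps:.2f}  "
          f"last CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```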

2003 ◽  
Vol 11 (4) ◽  
pp. 381-396 ◽  
Author(s):  
Joshua D. Clinton ◽  
Adam Meirowitz

Scholars of legislative studies typically use ideal point estimates from scaling procedures to test theories of legislative politics. We contend that theory and methods may be better integrated by directly incorporating both maintained hypotheses and hypotheses to be tested in the statistical model used to estimate legislator preferences. In this view of theory and estimation, formal modeling (1) provides auxiliary assumptions that serve as constraints in the estimation process, and (2) generates testable predictions. The estimation and hypothesis testing procedure uses roll call data to evaluate the validity of theoretically derived hypotheses under the presumption that the maintained hypotheses are true. We articulate the approach using the language of statistical inference (both frequentist and Bayesian). The approach is demonstrated in analyses of the well-studied Powell amendment to the federal aid-to-education bill in the 84th House and of the Compromise of 1790 in the 1st House.
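The general logic can be sketched with a deliberately simplified example that is not the authors' ideal point model: a hypothesis to be tested is imposed as a constraint on a toy logit for roll call votes and compared with the unconstrained fit via a likelihood-ratio test. The simulated ideal points, the vote-generating model, and the theory-implied slope value are all hypothetical.

```python
# Toy sketch: impose a theory-derived constraint during estimation and test it
# against the unconstrained model with a likelihood-ratio statistic.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)

n = 400
ideal_points = rng.normal(size=n)                 # hypothetical legislator ideal points
true_intercept, true_slope = -0.2, 1.0
p_yea = 1 / (1 + np.exp(-(true_intercept + true_slope * ideal_points)))
votes = rng.binomial(1, p_yea)                    # simulated yea/nay votes

X = sm.add_constant(ideal_points)

# Unconstrained model: intercept and slope both estimated.
unrestricted = sm.Logit(votes, X).fit(disp=0)

# Constrained model: the hypothesis to be tested fixes the slope at a
# theory-implied value (here 1.0); only the intercept is estimated.
theory_slope = 1.0
restricted = sm.Logit(votes, np.ones((n, 1)),
                      offset=theory_slope * ideal_points).fit(disp=0)

lr = 2 * (unrestricted.llf - restricted.llf)      # likelihood-ratio statistic
p_value = stats.chi2.sf(lr, df=1)                 # one constraint => 1 degree of freedom
print(f"LR = {lr:.3f}, p = {p_value:.3f}")
```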


MAKILA ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 14-28
Author(s):  
Sitna Marasabessy ◽  
Bokiraiya Latuamury ◽  
Iskar Iskar ◽  
Christy C.V. Suhendy

A minimum of 30% of the total area of an environmentally sustainable city should be green open space. Pressure on green open space, especially the green belt along the river border, tends to increase from year to year as the urban population grows. This study therefore aims to analyze people's perceptions of the role of green belt vegetation in the Wae Batu Gajah watershed in Ambon City. The research uses a descriptive method that describes a situation based on facts in the field, without treating the object, and tests hypotheses with the Chi-Square procedure. The results showed that the community's socio-economic parameters of age, formal education, and occupation had a significant influence on understanding of the river's green border, whereas gender and marital status had no significant effect. Formal education can influence attitudes and behavior through the values, character, and understanding of a problem that are built up gradually in a person. The type of work a person has performed over a long period likewise affects their mindset and behavior towards the environment; the poor have only two sources of income – salaries or informal business surpluses – which go to basic needs.
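The chi-square testing step can be sketched as follows. The contingency table is hypothetical, since the survey data themselves are not reproduced here; its rows stand for formal-education levels and its columns for levels of understanding of the river green belt.

```python
# Sketch of a chi-square test of independence on a hypothetical survey table.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: education level (rows) x understanding (columns: low, medium, high)
observed = np.array([
    [18, 12,  5],   # primary education
    [10, 20, 15],   # secondary education
    [ 4, 14, 22],   # higher education
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) would indicate that understanding of the
# green belt is not independent of formal education, in line with the result
# reported above.
```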


2009 ◽  
pp. 154-172
Author(s):  
George H. Weinberg ◽  
John A. Schumaker

1990 ◽  
Vol 47 (5) ◽  
pp. 948-959 ◽  
Author(s):  
Laura J. Richards ◽  
Jon T. Schnute ◽  
Claudia M. Hand

In this paper we develop a multivariate analysis of length and age at maturity that includes the univariate maturity model of Schnute and Richards (1990, Can. J. Fish. Aquat. Sci. 47: 24–40) as a special case. In addition, we address the problem of drawing meaningful conclusions from large data sets oriented to fish maturity, and we present statistical tests of such conclusions. We illustrate our approach with comparisons among male and female lingcod (Ophiodon elongatus) from three stocks along the coast of British Columbia, Canada. From the univariate analysis, we demonstrate that male lingcod mature at a smaller size than female lingcod, and that for each sex, size at maturity increases with latitude. From the multivariate analysis, we determine that length and age together provide a better prediction of lingcod maturation than either variate considered alone. The multivariate model is applicable to any situation for which one or more positive variates is asymptotically related to a probability measure in the range (0, 1).
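The kind of comparison described can be sketched with simulated data; this is not the authors' maturity model (which builds on Schnute and Richards 1990), only a generic illustration in which maturity is modelled as a binary outcome on length alone and on length plus age, and the two logistic fits are compared with a likelihood-ratio test.

```python
# Sketch: does age add predictive value for maturity beyond length alone?
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)

n = 500
length = rng.normal(70, 10, size=n)            # hypothetical lengths (cm)
age = rng.normal(5, 1.5, size=n)               # hypothetical ages (years)
logit_p = -20 + 0.22 * length + 0.8 * age      # both variates influence maturity
mature = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

m_length = sm.Logit(mature, sm.add_constant(length)).fit(disp=0)
m_both = sm.Logit(mature, sm.add_constant(np.column_stack([length, age]))).fit(disp=0)

lr = 2 * (m_both.llf - m_length.llf)           # likelihood-ratio statistic for adding age
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.4f}")
```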


2005 ◽  
Vol 2 ◽  
pp. 151-155 ◽  
Author(s):  
F. Piccolo ◽  
G. B. Chirico

Abstract. Radar rainfall data are affected by several types of error. Besides the error in the measurement of rainfall reflectivity and its transformation into rainfall intensity, random errors can be generated by the temporal spacing of the radar scans. The aim of this work is to analyze the sensitivity of the estimated rainfall maps to the radar sampling interval, i.e. the time interval between two consecutive radar scans. This analysis has been performed with data collected by a polarimetric C-band radar in Rome, Italy. The radar data consist of reflectivity maps with a sampling interval of 1 min and a spatial resolution of 300 m, covering an area of 1296 km². The transformation of the reflectivity maps into rainfall fields has been validated against rainfall data collected by a network of 14 raingauges distributed across the study area. Accumulated rainfall maps have been calculated for different spatial resolutions (from 300 m to 2400 m) and different sampling intervals (from 1 min to 16 min). The observed differences between the estimated rainfall maps are significant, showing that the sampling interval can be an important source of error in radar rainfall measurements.
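A simplified, self-contained sketch of the sampling-interval effect is given below for a single pixel, using a synthetic 1-min rain-rate series rather than the radar data: each retained scan is held constant over the sampling interval, and the hourly accumulation is recomputed for coarser intervals. The synthetic series and the hold-constant assumption are illustrative only.

```python
# Sketch: how coarser radar sampling intervals change an hourly rainfall accumulation.
import numpy as np

rng = np.random.default_rng(3)

minutes = 60
# Synthetic 1-min rain rates (mm/h) with a short convective burst.
rain_rate = rng.gamma(shape=0.4, scale=6.0, size=minutes)
rain_rate[20:30] += 30.0

def accumulate(rates_mm_per_h, sampling_interval_min):
    """Accumulated depth (mm) when only every k-th 1-min scan is used and its
    value is held constant over the sampling interval."""
    sampled = rates_mm_per_h[::sampling_interval_min]
    return np.sum(sampled) * sampling_interval_min / 60.0

reference = accumulate(rain_rate, 1)
for interval in (1, 2, 4, 8, 16):
    estimate = accumulate(rain_rate, interval)
    print(f"sampling interval {interval:2d} min: {estimate:6.2f} mm  "
          f"(error vs 1-min: {estimate - reference:+.2f} mm)")
```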


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Ching-Wen Hong ◽  
Wen-Chuan Lee ◽  
Jong-Wuu Wu

Process capability analysis has been widely applied in the field of quality control to monitor the performance of industrial processes. In practice, the lifetime performance index C_L is a popular means to assess the performance and potential of a process, where L is the lower specification limit. This study applies large-sample theory to construct a maximum likelihood estimator (MLE) of C_L under the Weibull distribution with a progressive first-failure-censored sampling plan. The MLE of C_L is then utilized to develop a new hypothesis testing procedure under the condition of a known L.
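A heavily simplified sketch is given below; it ignores the progressive first-failure censoring that is central to the paper, fits a Weibull model to complete (uncensored) lifetimes by maximum likelihood, and plugs the fitted mean and standard deviation into a textbook form of the index, C_L = (mu − L) / sigma. The data, the lower specification limit L, and that particular form of C_L are assumptions for illustration only.

```python
# Illustrative sketch only: Weibull MLE on complete data and a plug-in estimate
# of a textbook lifetime performance index (not the paper's censored procedure).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

true_shape, true_scale = 2.0, 10.0
lifetimes = stats.weibull_min.rvs(true_shape, scale=true_scale, size=200, random_state=rng)

L = 3.0  # hypothetical lower specification limit for the lifetime

# MLE of the Weibull shape and scale (location fixed at zero).
shape_hat, _, scale_hat = stats.weibull_min.fit(lifetimes, floc=0)

mu_hat = stats.weibull_min.mean(shape_hat, scale=scale_hat)
sigma_hat = stats.weibull_min.std(shape_hat, scale=scale_hat)
C_L_hat = (mu_hat - L) / sigma_hat   # assumed textbook form of the index

print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.2f}, estimated C_L = {C_L_hat:.2f}")
# A testing procedure would compare this estimate against a required target
# value of C_L; the paper derives the appropriate test under censoring.
```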


Author(s):  
David J. Cochran ◽  
Jerry D. Gibson

The polarity coincidence correlator (PCC), a nonparametric hypothesis testing procedure, is presented as an alternative to the Pearson product-moment correlation coefficient (r). Previous theoretical results indicate that the PCC statistic should be inferior to the correlation coefficient when the data are normal, but it may outperform r for non-normal data with a highly peaked probability density function. Application of the PCC to a human factors experiment is illustrated. For this particular application, the PCC performance compares favorably to the product-moment correlation test when the data are normal and exceeds that of the correlation test for non-normal data. The results, combined with the ease of PCC computation, support the PCC as a promising test for human factors experimentation.
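One common way to build a polarity coincidence statistic is sketched below; it is not necessarily the exact construction used in the paper. Each series is centred at its median, the signs are multiplied to form the PCC statistic, independence of the polarities is assessed with an exact test on the 2x2 sign table, and the Pearson r is printed alongside for comparison. The heavy-tailed simulated data are illustrative.

```python
# Sketch of a polarity coincidence correlator versus the Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n = 100
x = rng.standard_t(df=3, size=n)              # heavy-tailed (highly peaked) data
y = 0.5 * x + rng.standard_t(df=3, size=n)    # correlated with x

sx = np.sign(x - np.median(x))                # polarities after median centring
sy = np.sign(y - np.median(y))

pcc = np.mean(sx * sy)                        # polarity coincidence statistic in [-1, 1]

# Exact test of independence of the polarities via the 2x2 table of sign pairs.
table = np.array([[np.sum((sx > 0) & (sy > 0)), np.sum((sx > 0) & (sy < 0))],
                  [np.sum((sx < 0) & (sy > 0)), np.sum((sx < 0) & (sy < 0))]])
_, sign_p = stats.fisher_exact(table)

pearson_r, pearson_p = stats.pearsonr(x, y)
print(f"PCC = {pcc:.2f} (sign-table p = {sign_p:.4f}), "
      f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.4f})")
```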

