Tables Describing Small-Sample Properties of the Mean, Median, Standard Deviation and Other Statistics in Sampling from Various Distributions.

1964, Vol 127 (1), pp. 134
Author(s): S. Vajda, Churchill Eisenhart

1964, Vol 18 (85), pp. 154
Author(s): Julius Lieblein, Churchill Eisenhart, Lola S. Deming, Celia S. Martin

Author(s): Zhigang Wei, Limin Luo, Fulun Yang, Robert Rebandt

Fatigue design curve construction is commonly used for durability and reliability assessment of engineering components subjected to cyclic loading. A wide variety of design curve construction methods have been developed over the past several decades, and some have been adopted by engineering codes and are widely used in industry. However, the traditional design curve construction methods usually require significant amounts of test data for the constructed design curves to be used consistently and reliably in product design and validation. To reduce the test sample size and the associated testing time and cost, several Bayesian-statistics-based design curve construction methods have recently been developed by several research groups. Among these, an efficient Monte Carlo simulation-based resampling method developed by the authors of this paper is of particular importance. That method relies on a large amount of reliable historical fatigue test data, the associated probabilistic distributions of the mean and standard deviation of the failure cycles, and an advanced acceptance-rejection resampling algorithm. However, finite element analysis (FEA) and a special stress recovery technique are required to process the test data, which is usually a time-consuming process, so a more straightforward approach that avoids these intermediate steps is strongly preferred. This study presents such an approach, in which the only historical information needed is the distribution of the standard deviation of the cycles to failure; the distribution of the mean is calculated directly from the currently tested data and the Central Limit Theorem. Neither FEA nor a stress recovery technique is required, and the effort put into design curve construction can be significantly reduced. This method can be used to complement the previously developed Bayesian methods.
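A minimal Monte Carlo sketch of the general idea described above, not the authors' actual implementation: the historical distribution of the standard deviation of log life and the CLT-based distribution of the mean are combined by simulation to obtain a lower-bound design value. The specific distributions, parameter values, and test data below are assumptions for illustration only.

```python
# Illustrative sketch: design value for log10(cycles to failure) at one stress level.
import numpy as np

rng = np.random.default_rng(0)

# Current small-sample fatigue test data (hypothetical), log10(cycles to failure)
log_cycles = np.log10(np.array([1.2e5, 2.1e5, 1.6e5, 1.9e5]))
n = log_cycles.size
sample_mean = log_cycles.mean()

# Historical information: assumed distribution of the standard deviation
# (a lognormal fit to prior test programs; purely illustrative values)
n_sim = 100_000
sigma = rng.lognormal(mean=np.log(0.12), sigma=0.25, size=n_sim)

# Central Limit Theorem: the sample mean is approximately Normal(mu, sigma/sqrt(n)),
# so draw plausible population means consistent with the observed sample mean.
mu = rng.normal(loc=sample_mean, scale=sigma / np.sqrt(n))

# Example design value: a conservative estimate of the 5th percentile of log life,
# taken over the simulated (mu, sigma) pairs.
design_log_life = np.quantile(mu - 1.645 * sigma, 0.05)
print(f"design log10(life): {design_log_life:.3f}")
```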


1969, Vol 15 (1), pp. 72-83
Author(s): William A. Groff, Robert I. Ellin

Abstract A rapid and accurate method for analyzing pyridinium oximes (N-methylpyridinium-2-aldoxime chloride, N,N'-trimethylene-(pyridinium-4-aldoxime) dihalide, and N,N'-oxydimethyl-(pyridinium-4-aldoxime) dichloride) in plasma, urine, and whole blood is described. The method is completely automated and requires small sample volumes. Concentrations ranging from 3 to 120 µEq./L. in biologic fluids can be determined at a rate of 40 samples per hour. The technique can be applied to oximes that are unstable in basic solution. The average variation of the oxime concentration used to establish calibration curves, as determined by the ratio of the standard deviation to the mean, was ±1.5%. Plasma and albumin increase the transfer rate of the oximes through the dialyzing membrane. Theoretical considerations to explain this phenomenon are presented.
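The "average variation" quoted above is the coefficient of variation (standard deviation divided by the mean) of replicate calibration readings. A minimal sketch with made-up replicate values:

```python
# Hypothetical replicate oxime calibration readings (µEq./L.), for illustration only
import numpy as np

readings = np.array([60.1, 59.2, 60.8, 59.8, 60.4])
cv = readings.std(ddof=1) / readings.mean()
print(f"coefficient of variation: {cv * 100:.1f}%")
```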


1996, Vol 21 (4), pp. 299-332
Author(s): Larry V. Hedges, Jack L. Vevea

When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We investigate a selection model based on one-tailed p values in the context of a random effects model. The procedure both models the selection process and corrects for the consequences of selection on estimates of the mean and variance of effect parameters. A test of the statistical significance of selection is also provided. The small sample properties of the method are evaluated by means of simulations, and the asymptotic theory is found to be reasonably accurate under correct model specification and plausible conditions. The method substantially reduces bias due to selection when model specification is correct, but the variance of estimates is increased; thus mean squared error is reduced only when selection produces substantial bias. The robustness of the method to violations of assumptions about the form of the distribution of the random effects is also investigated via simulation, and the model-corrected estimates of the mean effect are generally found to be much less biased than the uncorrected estimates. The significance test for selection bias, however, is found to be highly nonrobust, rejecting at up to 10 times the nominal rate when there is no selection but the distribution of the effects is incorrectly specified.
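An illustrative simulation of the phenomenon the abstract describes, not the authors' estimator: studies are generated from a random-effects model and then selected on one-tailed p values, which biases the naive mean of published effects upward. All parameter values and the step selection rule (publish if p < 0.05, otherwise with probability 0.1) are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

mu, tau = 0.2, 0.2              # true mean effect and between-study SD
k, n_per_arm = 5000, 30         # number of simulated studies, per-arm sample size

theta = rng.normal(mu, tau, size=k)            # true study-level effects
se = np.full(k, np.sqrt(2.0 / n_per_arm))      # approximate SE of a standardized mean difference
d = rng.normal(theta, se)                      # observed effect estimates
p_one = norm.sf(d / se)                        # one-tailed p-values for H0: effect <= 0

published = (p_one < 0.05) | (rng.random(k) < 0.10)
print(f"true mean effect: {mu:.2f}")
print(f"naive mean of published effects: {d[published].mean():.3f}")
```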


2009, Vol 59 (5)
Author(s): Viktor Witkovský, Gejza Wimmer

Abstract We consider the problem of making statistical inference about the mean of a normal distribution based on a random sample of quantized (digitized) observations. This problem arises, for example, in a measurement process with errors drawn from a normal distribution and with a measurement device or process with a known resolution, such as the resolution of an analog-to-digital converter or another digital instrument. In this paper we investigate the effect of quantization on subsequent statistical inference about the true mean. If the standard deviation of the measurement error is large with respect to the resolution of the indicating measurement device, the effect of quantization (digitization) diminishes and standard statistical inference is still valid. Hence, in this paper we consider situations where the standard deviation of the measurement error is relatively small. By Monte Carlo simulations we compare the small-sample properties of interval estimators of the mean based on the standard approach (i.e., ignoring the fact that the measurements have been quantized) with some recently suggested methods, including interval estimators based on the maximum likelihood approach and the fiducial approach. The paper extends the original study by Hannig et al. (2007).
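A small Monte Carlo sketch in the spirit of the comparison described above: it checks the empirical coverage of the naive t-interval for the mean when the readings are quantized with a resolution that is large relative to the measurement-error standard deviation. The resolution, sigma, sample size, and true mean are assumed values, and this is only the "standard approach" side of the comparison, not the maximum likelihood or fiducial methods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

mu, sigma = 10.02, 0.03     # true mean and measurement-error SD (assumed)
delta = 0.1                 # instrument resolution, large relative to sigma
n, n_rep = 10, 20000
alpha = 0.05

covered = 0
for _ in range(n_rep):
    x = rng.normal(mu, sigma, size=n)
    q = np.round(x / delta) * delta            # quantized (digitized) readings
    m, s = q.mean(), q.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)
    covered += (m - half <= mu <= m + half)

print(f"empirical coverage of the naive t-interval: {covered / n_rep:.3f}")
```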


1969, Vol 14 (9), pp. 470-471
Author(s): M. David Merrill

1972, Vol 28 (03), pp. 447-456
Author(s): E. A. Murphy, M. E. Francis, J. F. Mustard

Summary The characteristics of experimental error in measurement of platelet radioactivity have been explored by blind replicate determinations on specimens taken on several days on each of three Walker hounds. Analysis suggests that it is not unreasonable to suppose that error for each sample is normally distributed; and while there is evidence that the variance is heterogeneous, no systematic relationship has been discovered between the mean and the standard deviation of the determinations on individual samples. Thus, since it would be impracticable for investigators to do replicate determinations as a routine, no improvement over simple unweighted least squares estimation on untransformed data suggests itself.
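A minimal sketch of the kind of check described above: compute the mean and standard deviation of each replicate group and look for a systematic association between them. The replicate values and the use of a Spearman rank correlation are assumptions for demonstration, not the authors' analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical blind replicate determinations of platelet radioactivity (counts/min),
# one row per specimen
replicates = np.array([
    [512, 498, 505],
    [1210, 1185, 1230],
    [760, 775, 742],
    [330, 345, 338],
    [950, 912, 934],
])

means = replicates.mean(axis=1)
sds = replicates.std(axis=1, ddof=1)

rho, p = stats.spearmanr(means, sds)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
# A weak or non-significant association would support using simple unweighted
# least squares on the untransformed data, as the summary concludes.
```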

