Test Method with Small Samples of Electro-Explosive Devices Based on Information Equivalence Principle

2011 ◽  
Vol 65 ◽  
pp. 291-294
Author(s):  
Yao Hua Wang ◽  
Liang Wang ◽  
Hai Shan Yang ◽  
Bao Guo Zhu

To address a problem that commonly arises in assessing the high ignition reliability of electro-explosive devices (EED), a new test method based on the information equivalence principle is proposed, requiring a relatively small sample size. Following the definition of information, the method measures the reliability test information as the negative logarithm of the EED ignition probability and converts the large-sample GJB376-1987 test, conducted at a higher stimulation level, into a small-sample one. We applied this method to assess the ignition reliability of an EED used in an emergency opening system. The result is that only 29 samples are needed to demonstrate an ignition reliability greater than 0.999 at a confidence level of not less than 95%. Compared with the 2996 samples required by GJB376-1987, the method greatly reduces sample usage. Tests show that the small-sample method based on the information equivalence principle is accurate and feasible for EED ignition reliability testing and meets the objective of the experimental design.
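For context on the sample sizes quoted in the abstract, a minimal sketch follows. It assumes the classical zero-failure binomial demonstration formula underlies the GJB376-1987 figure and uses only the generic self-information form I = −ln(p); the authors' actual conversion between test levels is not reproduced, and the per-trial ignition probability below is a hypothetical value chosen purely to reproduce the 29-sample result.

```python
import math

# Classical zero-failure demonstration (assumed here to be the basis of the
# GJB376-1987 figure quoted above): n consecutive successes demonstrate
# reliability R at confidence C once R**n <= 1 - C, i.e. n >= ln(1-C)/ln(R).
C, R = 0.95, 0.999
n_classical = math.ceil(math.log(1 - C) / math.log(R))
print(n_classical)          # 2995, close to the 2996 samples cited for GJB376-1987

# Generic information-balance arithmetic: a successful firing whose marginal
# ignition probability is p carries self-information -ln(p), and the test must
# accumulate -ln(1 - C) in total. A hypothetical test level where p = 0.90
# (not from the paper) makes each trial roughly 100x more informative than a
# trial at p = 0.999, so far fewer trials are needed.
I_required = -math.log(1 - C)
p_alt = 0.90                # hypothetical marginal ignition probability
n_small = math.ceil(I_required / -math.log(p_alt))
print(n_small)              # 29
```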

2011 ◽  
Vol 6 (2) ◽  
pp. 252-277 ◽  
Author(s):  
Stephen T. Ziliak

Abstract. Student's exacting theory of errors, both random and real, marked a significant advance over the ambiguous reports of plant life and fermentation asserted by chemists from Priestley and Lavoisier down to Pasteur and Johannsen, working at the Carlsberg Laboratory. One reason seems to be that William Sealy Gosset (1876–1937), aka "Student" – he of Student's t-table and test of statistical significance – rejected artificial rules about sample size, experimental design, and the level of significance, and took instead an economic approach to the logic of decisions made under uncertainty. In his job as Apprentice Brewer, Head Experimental Brewer, and finally Head Brewer of Guinness, Student produced small samples of experimental barley, malt, and hops, seeking guidance for industrial quality control and maximum expected profit at the large-scale brewery. In the process Student invented or inspired half of modern statistics. This article draws on original archival evidence, shedding light on several core yet neglected aspects of Student's methods, that is, Guinnessometrics, not discussed by Ronald A. Fisher (1890–1962). The focus is on Student's small-sample, economic approach to real error minimization, particularly in the field and laboratory experiments he conducted on barley and malt from 1904 to 1937. Balanced designs of experiments, he found, are more efficient than random ones and have higher power to detect large and real treatment differences in a series of repeated and independent experiments. Student's world-class achievement poses a challenge to every science. Should statistical methods – such as the choice of sample size, experimental design, and level of significance – follow the purpose of the experiment, rather than the other way around? (JEL classification codes: C10, C90, C93, L66)


PEDIATRICS ◽  
1989 ◽  
Vol 83 (3) ◽  
pp. A72-A72
Author(s):  
Student

The believer in the law of small numbers practices science as follows: 1. He gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high. He overestimates power. 2. He has undue confidence in early trends (e.g., the data of the first few subjects) and in the stability of observed patterns (e.g., the number and identity of significant results). He overestimates significance. 3. In evaluating replications, his own or others', he has unreasonably high expectations about the replicability of significant results. He underestimates the breadth of confidence intervals. 4. He rarely attributes a deviation of results from expectations to sampling variability, because he finds a causal "explanation" for any discrepancy. Thus, he has little opportunity to recognize sampling variation in action. His belief in the law of small numbers, therefore, will forever remain intact.
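A quick simulation, not part of the original text, makes the first and third points concrete: with a modest true effect and 15 subjects per group, the power of a two-sample t-test, and hence the chance that an exact replication of a significant result is itself significant, is far below what intuition suggests. The effect size and group size are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, alpha, reps = 15, 0.5, 0.05, 5000      # subjects per group, true effect size d

# simulate many two-group studies with a true mean difference of d
a = rng.normal(0.0, 1.0, size=(reps, n))
b = rng.normal(d, 1.0, size=(reps, n))
pvals = stats.ttest_ind(a, b, axis=1).pvalue

power = np.mean(pvals < alpha)
print(f"power with n={n} per group and d={d}: {power:.2f}")   # roughly 0.26
# The probability that an exact replication of a significant finding is itself
# significant equals this same power -- far below what intuition suggests.
```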


2017 ◽  
Vol 17 (9) ◽  
pp. 1623-1629 ◽  
Author(s):  
Berry Boessenkool ◽  
Gerd Bürger ◽  
Maik Heistermann

Abstract. High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
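A small numerical sketch (not the authors' code) of the undersampling argument: empirical quantiles from order statistics cannot exceed the largest observation and are biased low when the sample size is small relative to the target return period, whereas a fitted GPD can extrapolate beyond it. The paper fits the GPD by L-moments; scipy's maximum-likelihood fit is used here as a stand-in, and the distribution parameters are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_dist = stats.genpareto(c=0.1, scale=5.0)   # hypothetical intensity tail
q = 0.999                                        # target non-exceedance probability

for n in (50, 200, 1000):
    emp, par = [], []
    for _ in range(200):
        x = true_dist.ppf(rng.uniform(size=n))   # draw a sample of size n
        emp.append(np.quantile(x, q))            # order-statistics (plotting-position) estimate
        c, loc, scale = stats.genpareto.fit(x, floc=0.0)
        par.append(stats.genpareto(c, loc=loc, scale=scale).ppf(q))
    print(f"n={n:5d}  true={true_dist.ppf(q):6.1f}  "
          f"empirical={np.mean(emp):6.1f}  GPD={np.mean(par):6.1f}")
# For n far below 1/(1-q) the empirical estimate is capped near the sample maximum
# and biased low; the parametric estimate extrapolates beyond the largest value.
```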


Author(s):  
J. Mullaert ◽  
M. Bouaziz ◽  
Y. Seeleuthner ◽  
B. Bigio ◽  
J-L. Casanova ◽  
...  

Abstract. Many methods for rare variant association studies require permutations to assess the significance of tests. Standard permutations assume that all individuals are exchangeable and do not take population stratification (PS), a known confounding factor in genetic studies, into account. We propose a novel strategy, LocPerm, in which individuals are permuted only with their closest ancestry-based neighbors. We performed a simulation study, focusing on small samples, to evaluate and compare LocPerm with standard permutations and with classical adjustment on the first principal components. Under the null hypothesis, LocPerm was the only method providing an acceptable type I error, regardless of sample size and level of stratification. The power of LocPerm was similar to that of standard permutation in the absence of PS, and remained stable in different PS scenarios. We conclude that LocPerm is a method of choice for taking PS and/or small sample size into account in rare variant association studies.
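A rough sketch of the idea behind LocPerm, not the authors' implementation: phenotype labels are exchanged only among genetically similar individuals. Here the ancestry-based neighborhoods are approximated by k-means clusters on the top principal components; the clustering step, the number of clusters, and the function names are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_permutation(phenotype, pcs, n_clusters=20, seed=0):
    """Shuffle phenotype labels only within ancestry-based blocks (assumed here
    to be k-means clusters of the top principal components)."""
    rng = np.random.default_rng(seed)
    blocks = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pcs)
    permuted = np.asarray(phenotype).copy()
    for b in np.unique(blocks):
        idx = np.where(blocks == b)[0]
        permuted[idx] = permuted[rng.permutation(idx)]   # shuffle only inside the block
    return permuted

# A permutation p-value is then obtained by recomputing the chosen rare-variant
# association statistic on many such locally permuted phenotypes.
```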


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Vahid Ebrahimi ◽  
Zahra Bagheri ◽  
Zahra Shayan ◽  
Peyman Jafari

Assessing differential item functioning (DIF) using the ordinal logistic regression (OLR) model depends heavily on the asymptotic sampling distribution of the maximum likelihood (ML) estimators. The ML estimation method, which is often used to estimate the parameters of the OLR model for DIF detection, may be substantially biased with small samples. This study proposes a new application of the elastic net regularized OLR model, a machine learning approach, for assessing DIF between two groups with small samples. Accordingly, a simulation study was conducted to compare the powers and type I error rates of the regularized and nonregularized OLR models in detecting DIF under various conditions, including moderate and severe magnitudes of DIF (DIF = 0.4 and 0.8), sample size (N), sample size ratio (R), scale length (I), and weighting parameter (w). The simulation results revealed that for I = 5 and regardless of R, the elastic net regularized OLR model with w = 0.1, as compared with the nonregularized OLR model, increased the power of detecting moderate uniform DIF (DIF = 0.4) by approximately 35% and 21% for N = 100 and 150, respectively. Moreover, for I = 10 and severe uniform DIF (DIF = 0.8), the average power of the elastic net regularized OLR model with 0.03 ≤ w ≤ 0.06, as compared with the nonregularized OLR model, increased by approximately 29.3% and 11.2% for N = 100 and 150, respectively. In these cases, the type I error rates of the regularized and nonregularized OLR models were below or close to the nominal level of 0.05. In general, this simulation study showed that the elastic net regularized OLR model outperformed the nonregularized OLR model, especially in extremely small sample size groups. Furthermore, the present research provides a guideline and some recommendations for researchers who conduct DIF studies with small sample sizes.
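An illustrative sketch of applying an elastic net penalty in a DIF-style regression. Since scikit-learn offers no ordinal model, a binary logistic regression is used here as a simplified stand-in for the ordinal case, with the group main effect and the trait-by-group interaction capturing uniform and non-uniform DIF; the mixing parameter l1_ratio plays a role analogous to the paper's weighting parameter w, but the exact correspondence is not assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dif_coefficients(item_response, trait_score, group, l1_ratio=0.1, C=1.0):
    """Fit an elastic net penalized logistic regression of a dichotomous item
    on the trait score, the group indicator, and their interaction."""
    X = np.column_stack([trait_score, group, trait_score * group])
    model = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=l1_ratio, C=C, max_iter=5000)
    model.fit(X, item_response)
    # coefficients: [trait, group -> uniform DIF, trait*group -> non-uniform DIF]
    return model.coef_.ravel()
```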


1967 ◽  
Vol 11 ◽  
pp. 185-190
Author(s):  
George H. Glade

Abstract. Manufacture of reed switches, critical components in present-day data-processing support devices, requires a means of accurate, rapid analysis of the elements used in plating the levers of the switch. Because of greatly reduced feedback time, X-ray spectroscopy has replaced metallographic sectioning and optical measurement as a plating-thickness control method. While 6 hr were required to obtain thickness data for a given sample size by sectioning, X-ray spectroscopy requires only 2 hr, which permits better control of the plating operation. X-ray spectroscopy is now used routinely to control both gold and rhodium plating thicknesses in the 20- to 100-μin. (1 μin. = 1 × 10−6 in.) thickness range. The large number of samples prevents long count durations, while the small sample size (0.110 by 0.033 in.) reduces the precision of the analysis. However, the precision of the X-ray and optical methods is approximately the same, 8% variance. X-ray accuracy is comparable to that of sectioning, since the standards are obtained by sectioning. Simplicity of operation is required, since relatively untrained operators are used. An aperture system is used to reduce background. The rhodium thickness measurement is obtained from the gross rhodium intensity. Attenuation of the gross nickel intensity from the base material was found to be a better measure of gold thickness. Calibration for both gold and rhodium is performed by using the same wide detector conditions. The choice of analysis is made by changing only the 2θ angle, thus avoiding the time required for recalibration when changing analyses.
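The gold-thickness measurement described above rests on attenuation of the substrate fluorescence by the plating, roughly I = I0·exp(−μt). The sketch below shows only that relationship; the attenuation coefficient and intensities are placeholders, since in practice the curve is calibrated against sectioned standards, as the abstract notes.

```python
import math

def gold_thickness_inches(I_measured, I_bare, mu_per_inch):
    """Thickness from attenuation of the substrate (nickel) line through the plating,
    assuming I = I0 * exp(-mu * t)."""
    return math.log(I_bare / I_measured) / mu_per_inch

# Hypothetical numbers for illustration only:
mu = 2.0e4            # effective linear attenuation coefficient, 1/inch (placeholder)
t = gold_thickness_inches(I_measured=8_500, I_bare=10_000, mu_per_inch=mu)
print(f"{t * 1e6:.0f} microinches")   # ~8 microinches with these made-up intensities
```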


Author(s):  
Yong Zhang ◽  
Chao Wang ◽  
Xin Lin ◽  
Guanjun Liu ◽  
Peng Yang ◽  
...  

Testability demonstration tests can effectively verify product capabilities of fault detection and isolation; however, they suffer from insufficient samples, long cycles, and high costs due to the destructiveness of fault injection tests, which leads to an increasing demand for small sample tests. The sequential posterior odds test can effectively reduce sample sizes, but its outcome can be random and the sample size may still be large. In this article, a censored sequential posterior odds test method is proposed, which controls the incremental risk caused by forced censoring within a contracted range by risk splitting. The development process of the testability demonstration test based on the censored sequential posterior odds test is designed. The number of censored tests and the calculation method of the censoring threshold are presented. The case application shows that, with the same prior distribution and constraint parameters, the average sample size of the proposed method is smaller than that of the sequential posterior odds test and of the classical method considering risks for both producers and consumers. The presented method further reduces the risk of misjudgment and the number of test samples, contributing to a shorter test cycle and lower costs.
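A generic sketch of a (non-censored) sequential posterior odds test for a fault detection rate, to show the decision rule the article builds on; the censoring and risk-splitting refinements proposed by the authors are not reproduced, and the hypothesized rates, prior odds, and risks below are assumptions.

```python
def sequential_posterior_odds(outcomes, p1=0.95, p0=0.80,
                              prior_odds=1.0, alpha=0.1, beta=0.1):
    """outcomes: iterable of 1 (fault detected) / 0 (fault missed)."""
    upper = (1 - beta) / alpha        # crossing it -> accept the higher rate p1
    lower = beta / (1 - alpha)        # crossing it -> accept the lower rate p0
    odds = prior_odds
    for n, y in enumerate(outcomes, start=1):
        # single-trial likelihood ratio of p1 versus p0
        odds *= (p1 if y else 1 - p1) / (p0 if y else 1 - p0)
        if odds >= upper:
            return "accept", n
        if odds <= lower:
            return "reject", n
    return "undecided", len(outcomes)  # a censoring rule would force a decision here
```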


2020 ◽  
Vol 11 ◽  
Author(s):  
Sanne C. Smid ◽  
Sonja D. Winter

When Bayesian estimation is used to analyze Structural Equation Models (SEMs), prior distributions need to be specified for all parameters in the model. Many popular software programs offer default prior distributions, which is helpful for novice users and makes Bayesian SEM accessible to a broad audience. However, when the sample size is small, those prior distributions are not always suitable and can lead to untrustworthy results. In this tutorial, we provide a non-technical discussion of the risks associated with the use of default priors in small sample contexts. We discuss how default priors can unintentionally behave as highly informative priors when samples are small. Also, we demonstrate an online educational Shiny app, in which users can explore the impact of varying prior distributions and sample sizes on model results. We discuss how the Shiny app can be used in teaching; provide a reading list with literature on how to specify suitable prior distributions; and discuss guidelines on how to recognize (mis)behaving priors. It is our hope that this tutorial helps to spread awareness of the importance of specifying suitable priors when Bayesian SEM is used with small samples.
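A tiny numerical illustration, independent of any particular SEM software, of the mechanism discussed above: in a conjugate normal-mean model the posterior is a precision-weighted average of prior and data, so a fixed "default" prior carries far more weight at n = 10 than at n = 500. The prior settings are assumptions chosen to mimic a generic default.

```python
import numpy as np

def posterior_mean(data, prior_mean=0.0, prior_sd=1.0, sigma=1.0):
    """Conjugate normal-mean posterior with known sigma: a precision-weighted
    average of the prior mean and the sample mean."""
    n = len(data)
    w_prior = 1.0 / prior_sd**2
    w_data = n / sigma**2
    return (w_prior * prior_mean + w_data * np.mean(data)) / (w_prior + w_data)

rng = np.random.default_rng(1)
for n in (10, 500):
    x = rng.normal(loc=2.0, scale=1.0, size=n)        # true mean is 2.0
    print(n, round(posterior_mean(x), 2))
# With n=10 the estimate is pulled noticeably toward the prior mean of 0;
# with n=500 it sits close to the sample mean.
```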


2022 ◽  
Author(s):  
Mia S. Tackney ◽  
Tim Morris ◽  
Ian White ◽  
Clemence Leyrat ◽  
Karla Diaz-Ordaz ◽  
...  

Abstract. Adjustment for baseline covariates in randomized trials has been shown to lead to gains in power and can protect against chance imbalances in covariates. For continuous covariates, there is a risk that the form of the relationship between the covariate and outcome is misspecified when taking an adjusted approach. Using a simulation study focusing on small to medium-sized individually randomized trials, we explore whether a range of adjustment methods are robust to misspecification, either in the covariate-outcome relationship or through an omitted covariate-treatment interaction. Specifically, we aim to identify potential settings where G-computation, Inverse Probability of Treatment Weighting (IPTW), Augmented Inverse Probability of Treatment Weighting (AIPTW), and Targeted Maximum Likelihood Estimation (TMLE) offer improvement over the commonly used Analysis of Covariance (ANCOVA). Our simulations show that all adjustment methods are generally robust to model misspecification if adjusting for a few covariates, sample size is 100 or larger, and there are no covariate-treatment interactions. When there is a non-linear interaction of treatment with a skewed covariate and sample size is small, all adjustment methods can suffer from bias; however, methods that allow for interactions (such as G-computation with interaction and IPTW) show improved results compared to ANCOVA. When there are a large number of covariates to adjust for, ANCOVA retains good properties while other methods suffer from under- or over-coverage. An outstanding issue for G-computation, IPTW, and AIPTW in small samples is that standard errors are underestimated; development of small-sample corrections is needed.
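Schematic versions (not the authors' code) of two of the adjusted estimators compared in the abstract, for a continuous outcome y, a binary treatment indicator t, and a covariate matrix X, all numpy arrays; the use of statsmodels and the Hajek-style weighting are assumptions of this sketch.

```python
import numpy as np
import statsmodels.api as sm

def g_computation(y, t, X):
    """Outcome regression with treatment-covariate interactions; average the
    predicted difference between everyone-treated and everyone-control."""
    def design(tt):
        return np.column_stack([np.ones_like(tt, dtype=float), tt, X, tt[:, None] * X])
    fit = sm.OLS(y, design(t)).fit()
    return float(np.mean(fit.predict(design(np.ones_like(t)))
                         - fit.predict(design(np.zeros_like(t)))))

def iptw(y, t, X):
    """Hajek-type inverse probability of treatment weighting estimate."""
    ps = sm.Logit(t, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))
    w = t / ps + (1 - t) / (1 - ps)
    return np.average(y, weights=w * t) - np.average(y, weights=w * (1 - t))
```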

