A median run length-based double-sampling X̄ chart with estimated parameters for minimizing the average sample size

2015 ◽  
Vol 80 (1-4) ◽  
pp. 411-426 ◽  
Author(s):  
W. L. Teoh ◽  
Michael B. C. Khoo ◽  
Philippe Castagliola ◽  
S. Chakraborti

1995 ◽  
Vol 8 (1) ◽  
pp. 117-127 ◽  
Author(s):  
James M. Grayson ◽  
George C. Runger ◽  
Douglas C. Montgomery

2019 ◽  
Author(s):  
Peter E Clayson ◽  
Kaylie Amanda Carbine ◽  
Scott Baldwin ◽  
Michael J. Larson

Methodological reporting guidelines for studies of event-related potentials (ERPs) were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters is a barrier to replication efforts, and difficulty with replicability increases in the presence of small sample sizes and low statistical power. We assessed whether the guidelines are followed and estimated the average sample size and power in recent research. Reporting behavior, sample sizes, and statistical designs were coded for 150 randomly sampled articles, published from 2011 to 2017 in five high-impact journals that frequently publish ERP research. On average, 63% of guidelines were reported, and reporting behavior was similar across journals, suggesting that gaps in reporting are a shortcoming of the field rather than of any specific journal. Publication of the guidelines paper had no impact on reporting behavior, suggesting that editors and peer reviewers are not enforcing these recommendations. The average sample size per group was 21. Statistical power was conservatively estimated as .72-.98 for a large effect size, .35-.73 for a medium effect, and .10-.18 for a small effect. These findings indicate that failure to report key guidelines is ubiquitous and that ERP studies are primarily powered to detect large effects. Such low power and insufficient adherence to reporting guidelines are substantial barriers to replication efforts. The methodological transparency and replicability of studies can be improved by the open sharing of processing code and experimental tasks and by a priori sample size calculations that ensure adequately powered studies.
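As a rough sketch of the kind of a priori sample size calculation recommended above (an illustration, not the authors' analysis; it assumes a simple two-group independent-samples t-test and the conventional Cohen's d benchmarks of 0.2, 0.5, and 0.8):

```python
# A priori power-analysis sketch (illustrative only, not the article's code).
# Assumes a two-group independent-samples t-test and Cohen's d benchmarks
# of 0.2 / 0.5 / 0.8 for small / medium / large effects.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    n_required = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    achieved = analysis.power(effect_size=d, nobs1=21, alpha=0.05)  # n = 21 per group
    print(f"{label}: need ~{n_required:.0f}/group for 80% power; "
          f"power at n = 21/group is {achieved:.2f}")
```

Under these assumptions, a group size of 21 yields roughly .70 power for a large effect and far less for medium and small effects, consistent with the estimates above.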


2021 ◽  
pp. 174702182110440
Author(s):  
Janine Hoffart ◽  
Jana Jarecki ◽  
Gilles Dutilh ◽  
Jörg Rieskamp

People often learn from experience about the distribution of outcomes of risky options. Typically, when people can actively sample information from risky gambles to make decisions, they draw small samples. We examine how the size of the sample that people experience in decisions from experience affects their preferences between risky options. In two studies (N = 40 each), we manipulated the size of the samples that people could experience from risky gambles and measured subjective selling prices and confidence in the selling-price judgments after sampling. The results show that, on average, sample size influenced neither the selling prices nor the confidence judgments. However, cognitive modeling of individual-level learning showed that most participants could be classified as Bayesian learners, whereas a minority adhered to a frequentist learning strategy, and that when learning was cognitively simpler, more participants adhered to the latter. The observed selling prices of Bayesian learners changed with sample size as predicted by Bayesian principles, whereas sample size affected the judgments of frequentist learners much less. These results illustrate the variability in how people learn from sampled information and provide an explanation for why sample size often does not affect judgments.
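A small numerical sketch of the distinction (my illustration under a simple Beta-Binomial assumption, not the authors' cognitive model): with a uniform Beta(1, 1) prior on the probability of a gamble's better outcome, the Bayesian posterior mean depends on the sample size even when the observed proportion is held fixed, whereas the frequentist relative-frequency estimate does not.

```python
# Illustrative sketch (not the authors' model): estimating the probability of a
# gamble's high outcome after observing k "highs" in n draws.
def frequentist_estimate(k, n):
    # Relative frequency: identical for any sample with the same proportion.
    return k / n

def bayesian_estimate(k, n, a=1.0, b=1.0):
    # Posterior mean under a Beta(a, b) prior: moves with n even at a fixed proportion.
    return (a + k) / (a + b + n)

for k, n in [(6, 8), (60, 80)]:
    print(f"n={n}: frequentist={frequentist_estimate(k, n):.3f}, "
          f"Bayesian={bayesian_estimate(k, n):.3f}")
# n=8:  frequentist=0.750, Bayesian=0.700
# n=80: frequentist=0.750, Bayesian=0.744  -> only the Bayesian estimate tracks n
```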


2018 ◽  
Vol 8 (1) ◽  
pp. 120
Author(s):  
Steven B. Kim ◽  
Jeffrey O. Wand

In medical, health, and sports sciences, researchers desire a device with high reliability and validity. This article focuses on reliability and validity studies with n subjects and m ≥ 2 repeated measurements per subject. High statistical power can be achieved by increasing n or m, and increasing m is often easier than increasing n in practice, unless m becomes so large that it introduces systematic bias. The sequential probability ratio test (SPRT) is a useful statistical method that can reach a conclusion in favor of a null hypothesis H0 or an alternative hypothesis H1 with, on average, about 50% of the sample size required by a non-sequential test. The traditional SPRT requires the likelihood function for each observed random variable, and evaluating the likelihood ratio after every observation within a subject can be a practical burden. Instead, the m observed random variables per subject can be transformed into a test statistic that has a known sampling distribution under H0 and under H1. This allows us to formulate an SPRT based on a sequence of test statistics. In this article, three types of study are considered: reliability of a device, reliability of a device relative to a criterion device, and validity of a device relative to a criterion device. Using the SPRT to test the reliability of a device results, for small m, in an average sample size of about 50% of the fixed sample size of a non-sequential test. For comparing a device to a criterion device, the average sample size approaches approximately 60% as m increases. The SPRT tolerates violation of the normality assumption in the validity study but not in the reliability study.
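A minimal sketch of an SPRT run on a sequence of per-subject test statistics (illustrative assumptions only: the statistic is taken to be N(0, 1) under H0 and N(1, 1) under H1 with alpha = beta = 0.05; the article's actual statistics and hypotheses differ):

```python
# Minimal SPRT sketch under assumed simple hypotheses (not the article's exact
# procedure): the per-subject test statistic T is N(0, 1) under H0 and N(1, 1)
# under H1, with Wald's stopping boundaries.
import math
from scipy.stats import norm

alpha, beta = 0.05, 0.05
lower = math.log(beta / (1 - alpha))   # accept H0 when the log-LR drops to here
upper = math.log((1 - beta) / alpha)   # accept H1 when the log-LR rises to here

def sprt(statistics):
    """Return the decision and the number of subjects used."""
    log_lr = 0.0
    for i, t in enumerate(statistics, start=1):
        log_lr += norm.logpdf(t, loc=1.0) - norm.logpdf(t, loc=0.0)
        if log_lr <= lower:
            return "accept H0", i
        if log_lr >= upper:
            return "accept H1", i
    return "continue sampling", len(statistics)

print(sprt([1.2, 0.8, 1.5, 0.9, 1.1, 1.3]))  # e.g. ('accept H1', 5)
```

Because the boundaries depend only on alpha and beta, sampling stops as soon as the accumulated evidence is strong enough in either direction, which is what produces the average sample-size savings described above.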


2018 ◽  
Vol 2018 ◽  
pp. 1-6 ◽  
Author(s):  
Huay Woon You

A synthetic double sampling (SDS) chart is commonly evaluated under the assumption that the process parameters (namely, the mean and standard deviation) are known. In practice, however, the process parameters are usually unknown and must be estimated from an in-control Phase I dataset, which leads to deterioration in the performance of the control chart. The average run length (ARL) has been the common performance measure in process monitoring with the SDS chart. Computing the ARL requires practitioners to specify the shift size in advance. This requirement is too restrictive, as practitioners may not have the experience to specify the shift size in advance. Thus, the expected average run length (EARL) is introduced to assess the performance of the SDS chart when the shift size is random. In this paper, the SDS chart, with known and with estimated process parameters, is evaluated based on the EARL and compared against the ARL performance measure.
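For context, the EARL used in this line of work is generally defined (a standard formulation in the run-length literature, not quoted from this article) by averaging the ARL over the distribution of the shift size δ; with δ assumed uniform on an interval (δ_min, δ_max):

$$ \mathrm{EARL} = \int_{\delta_{\min}}^{\delta_{\max}} \mathrm{ARL}(\delta)\, f(\delta)\, d\delta, \qquad f(\delta) = \frac{1}{\delta_{\max} - \delta_{\min}}, $$

so no single shift size has to be specified in advance, only a plausible range.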


2014 ◽  
Vol 67 ◽  
pp. 104-115 ◽  
Author(s):  
W.L. Teoh ◽  
Michael B.C. Khoo ◽  
Philippe Castagliola ◽  
S. Chakraborti
