Asymptotic optimality of binomial plans of fixed sample size in the class of plans with bounded average sample size

1978 ◽  
Vol 9 (6) ◽  
pp. 910-914
Author(s):  
P. N. Sapozhnikov
2019 ◽  
Author(s):  
Peter E Clayson ◽  
Kaylie Amanda Carbine ◽  
Scott Baldwin ◽  
Michael J. Larson

Methodological reporting guidelines for studies of event-related potentials (ERPs) were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters represents a barrier to replication efforts, and difficulty with replicability increases in the presence of small sample sizes and low statistical power. We assessed whether the guidelines are followed and estimated the average sample size and power in recent research. Reporting behavior, sample sizes, and statistical designs were coded for 150 randomly sampled articles published from 2011 to 2017 in five high-impact journals that frequently publish ERP research. An average of 63% of guidelines were reported, and reporting behavior was similar across journals, suggesting that gaps in reporting are a shortcoming of the field rather than of any specific journal. Publication of the guidelines paper had no impact on reporting behavior, suggesting that editors and peer reviewers are not enforcing these recommendations. The average sample size per group was 21. Statistical power was conservatively estimated as .72-.98 for a large effect size, .35-.73 for a medium effect, and .10-.18 for a small effect. These findings indicate that failing to report key guidelines is ubiquitous and that ERP studies are primarily powered to detect large effects. Such low power and inconsistent adherence to reporting guidelines represent substantial barriers to replication efforts. The methodological transparency and replicability of studies can be improved by the open sharing of processing code and experimental tasks and by a priori sample size calculations to ensure adequately powered studies.
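
As a rough illustration of the a priori power and sample size calculations the abstract recommends, the sketch below (not taken from the article) uses statsmodels to estimate power for a two-group comparison with 21 participants per group; the independent-samples t-test design and the alpha level are assumptions.

```python
# A minimal sketch, assuming an independent-samples t-test, 21 participants
# per group, and alpha = .05 (design details are assumptions, not from the article).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for label, d in [("large", 0.8), ("medium", 0.5), ("small", 0.2)]:
    power = analysis.solve_power(effect_size=d, nobs1=21, alpha=0.05, ratio=1.0)
    print(f"Power to detect a {label} effect (d={d}) with n=21 per group: {power:.2f}")

# A priori sample size per group needed for 80% power to detect a medium effect:
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0)
print(f"Required n per group for 80% power at d=0.5: {n_needed:.1f}")
```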


1989 ◽  
Vol 26 (02) ◽  
pp. 304-313 ◽  
Author(s):  
T. S. Ferguson ◽  
J. P. Hardwick

A manuscript with an unknown random number M of misprints is subjected to a series of proofreadings in an effort to detect and correct the misprints. On the nth proofreading, each remaining misprint is detected independently with probability p_{n-1}. Each proofreading costs an amount C_P > 0, and if one stops after n proofreadings, each misprint overlooked costs an amount c_n > 0. Two models are treated based on the distribution of M. In the Poisson model, the optimal stopping rule is seen to be a fixed sample size rule. In the binomial model, the myopic rule is optimal in many important cases. A generalization is made to problems in which individual misprints may have distinct probabilities of detection and distinct overlook costs.
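
The following is a minimal sketch of a myopic stopping comparison, offered as an illustration rather than the authors' derivation; it assumes Poisson(lam) misprints, a constant detection probability p on every pass, and a constant overlook cost c per remaining misprint, all of which are simplifying assumptions not taken from the paper.

```python
# Hypothetical illustration of a myopic stopping rule for proofreading.
# Assumptions (not from the paper): misprints ~ Poisson(lam), each pass detects
# every remaining misprint independently with the same probability p, each pass
# costs C_P, and each misprint left at the end costs c.
def myopic_stopping_pass(lam, p, C_P, c, max_passes=50):
    """Return the number of proofreadings after which the myopic rule stops."""
    for n in range(max_passes):
        expected_remaining = lam * (1 - p) ** n        # expected misprints left after n passes
        expected_saving = expected_remaining * p * c   # overlook cost removed by pass n+1
        if expected_saving <= C_P:                     # another pass no longer pays for itself
            return n
    return max_passes

# Example: 30 expected misprints, 60% detection per pass, pass costs 5, overlook costs 1.
print(myopic_stopping_pass(lam=30, p=0.6, C_P=5, c=1))
```

Note that under these constant-parameter assumptions the stopping pass depends only on lam, p, C_P, and c, not on what was actually found, which is consistent with the abstract's remark that the optimal rule in the Poisson model is a fixed sample size rule.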


2008 ◽  
pp. 1464-1464
Author(s):  
E. S. Krafsur ◽  
R. D. Moon ◽  
R. Albajes ◽  
O. Alomar ◽  
Elisabetta Chiappini ◽  
...  

2021 ◽  
pp. 174702182110440
Author(s):  
Janine Hoffart ◽  
Jana Jarecki ◽  
Gilles Dutilh ◽  
Jörg Rieskamp

People often learn from experience about the distribution of outcomes of risky options. When people can actively sample information from risky gambles before deciding, they typically draw small samples. We examine how the size of the sample that people experience in decisions from experience affects their preferences between risky options. In two studies (N = 40 each), we manipulated the size of the samples that people could experience from risky gambles and measured subjective selling prices and confidence in the selling-price judgments after sampling. The results show that, on average, sample size influenced neither the selling prices nor confidence. However, cognitive modeling of individual-level learning showed that most participants could be classified as Bayesian learners, whereas a minority adhered to a frequentist learning strategy, and that when learning was cognitively simpler more participants adhered to the latter. The observed selling prices of Bayesian learners changed with sample size as predicted by Bayesian principles, whereas sample size affected the judgments of frequentist learners much less. These results illustrate the variability in how people learn from sampled information and provide an explanation for why sample size often does not affect judgments.
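
A minimal sketch of the contrast between the two learning strategies named in the abstract (an illustration, not the authors' cognitive model): a Bayesian learner with a uniform Beta prior produces estimates that shift with sample size, whereas a frequentist learner's relative-frequency estimate does not; the Beta(1, 1) prior and the example proportions are assumptions.

```python
# Illustrative contrast (not the authors' model) between a Bayesian learner with a
# uniform Beta(1, 1) prior and a frequentist (relative-frequency) learner estimating
# the probability of a gamble's winning outcome from sampled draws.
def bayesian_estimate(wins, n, a=1.0, b=1.0):
    """Posterior mean of a Beta(a, b) prior after observing `wins` in `n` draws."""
    return (a + wins) / (a + b + n)

def frequentist_estimate(wins, n):
    """Observed relative frequency of wins."""
    return wins / n

# Same observed proportion of wins (80%) at different sample sizes:
for n in (5, 10, 50):
    wins = int(0.8 * n)
    print(n, round(bayesian_estimate(wins, n), 3), round(frequentist_estimate(wins, n), 3))
# The Bayesian estimate moves toward 0.8 as the sample grows (0.714, 0.75, 0.788),
# while the frequentist estimate stays at 0.8 regardless of sample size.
```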


2017 ◽  
Vol 6 (1-2) ◽  
pp. 169
Author(s):  
A. H. Abd Ellah

We consider the problem of constructing a predictive interval for the range of future observations from an exponential distribution. Two cases are considered: (1) fixed sample size (FSS) and (2) random sample size (RSS). We further derive the predictive function for both FSS and RSS in closed forms. Random sample sizes appear in many applications of life testing, and fixed sample size is a special case of random sample size. Illustrative examples are given, and factors of the predictive distribution are provided. A comparison of savings is made with the above method. To show the applications of our results, we present some simulation experiments. Finally, we apply our results to some real data sets in life testing.
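
The sketch below is a Monte Carlo approximation of a predictive interval for the range of m future exponential observations in the fixed sample size case; it uses a plug-in (MLE) estimate of the rate rather than the paper's closed-form predictive function, so it should be read as a rough illustration under that assumption.

```python
# A minimal Monte Carlo sketch (not the paper's closed-form result) of a predictive
# interval for the range of m future exponential observations, FSS case, using a
# plug-in (MLE) estimate of the rate.
import numpy as np

rng = np.random.default_rng(0)

def predictive_range_interval(observed, m, level=0.95, n_sim=100_000):
    """Approximate a `level` predictive interval for the range of m future draws."""
    scale_mle = np.mean(observed)                       # plug-in estimate of the scale
    future = rng.exponential(scale_mle, size=(n_sim, m))
    ranges = future.max(axis=1) - future.min(axis=1)
    lo, hi = np.quantile(ranges, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Example with a simulated "observed" life-testing sample of size 20:
observed = rng.exponential(scale=2.0, size=20)
print(predictive_range_interval(observed, m=5))
```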

