Sampling Plans Which Approximately Minimize the Maximum Expected Sample Size

1964 ◽  
Vol 59 (305) ◽  
pp. 67-88 ◽  
Author(s):  
David Freeman ◽  
Lionel Weiss


2016 ◽  
Vol 5 (1) ◽  
pp. 39 ◽  
Author(s):  
Abbas Najim Salman ◽  
Maymona Ameen

This paper is concerned with a minimax shrinkage estimator that uses a double-stage shrinkage technique to lower the mean squared error, intended for estimating the shape parameter (α) of the Generalized Rayleigh distribution in a region (R) around available prior knowledge (α0) about the actual value of α, taken as an initial estimate, for the case in which the scale parameter (λ) is known. In situations where experimentation is time-consuming or very costly, a double-stage procedure can be used to reduce the expected sample size needed to obtain the estimator. The proposed estimator is shown to have smaller mean squared error for certain choices of the shrinkage weight factor ψ(·) and a suitable region R. Expressions for the bias, mean squared error (MSE), expected sample size [E(n/α, R)], expected sample size proportion [E(n/α, R)/n], probability of avoiding the second sample, and percentage of overall sample saved for the proposed estimator are derived. Numerical results and conclusions for the expressions above are presented for the case in which the considered estimator is a testimator of level of significance Δ. Comparisons with the minimax estimator and with the most recent studies are made to show the effectiveness of the proposed estimator.
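A minimal sketch of how a double-stage shrinkage procedure of this kind is typically organized, not the authors' expressions: it assumes a Generalized Rayleigh sample with known scale λ, the standard first-stage MLE of the shape α, a hypothetical constant shrinkage weight ψ, and an illustrative region R and stage sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def gr_sample(alpha, lam, n):
    """Draw n values from a Generalized Rayleigh(alpha, lam) via the inverse CDF,
    F(x) = (1 - exp(-(lam*x)^2))^alpha."""
    u = rng.uniform(size=n)
    return np.sqrt(-np.log(1.0 - u ** (1.0 / alpha))) / lam

def mle_shape(x, lam):
    """MLE of the shape parameter when the scale lam is known:
    alpha_hat = -n / sum(log(1 - exp(-(lam*x)^2)))."""
    t = np.log1p(-np.exp(-(lam * x) ** 2))
    return -len(x) / t.sum()

def double_stage_shrinkage(alpha0, lam, n1, n2, R, psi, alpha_true):
    """One run of the double-stage procedure: if the first-stage estimate falls in
    the region R around the prior guess alpha0, stop and shrink toward alpha0;
    otherwise draw a second sample and use the pooled estimate."""
    x1 = gr_sample(alpha_true, lam, n1)
    a1 = mle_shape(x1, lam)
    if R[0] <= a1 <= R[1]:                     # stop at stage one
        return psi * a1 + (1.0 - psi) * alpha0, n1
    x = np.concatenate([x1, gr_sample(alpha_true, lam, n2)])
    return mle_shape(x, lam), n1 + n2          # pooled second-stage estimate

# Illustrative Monte Carlo estimate of MSE and expected sample size.
alpha0, lam, alpha_true = 2.0, 1.0, 2.2
runs = [double_stage_shrinkage(alpha0, lam, 15, 15, (1.5, 2.5), 0.3, alpha_true)
        for _ in range(5000)]
est, n_used = np.array(runs).T
print("MSE ~", np.mean((est - alpha_true) ** 2), " E[n] ~", n_used.mean())
```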


2015 ◽  
Vol 78 (7) ◽  
pp. 1370-1374
Author(s):  
ANDREAS KIERMEIER ◽  
JOHN SUMNER ◽  
IAN JENSON

Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
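A hedged sketch of the kind of calculation behind comparing N-60 with larger plans: it assumes each analytical unit independently tests positive with some probability that reflects lot contamination, which ignores within-lot clustering and is not the model used in the cited risk assessment.

```python
def detection_probability(n_units, p_positive):
    """Probability that at least one of n_units analytical units tests positive,
    assuming independent units with per-unit detection probability p_positive."""
    return 1.0 - (1.0 - p_positive) ** n_units

# Illustrative comparison for a lightly contaminated lot where each
# sampled unit has a 1% chance of testing positive.
for n in (60, 90, 120):
    print(f"N-{n}: P(detect) = {detection_probability(n, 0.01):.3f}")
```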


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 193 ◽  
Author(s):  
Muhammad Aslam ◽  
Mansour Sattam Aldosari

The existing sampling plans which use the coefficient of variation (CV) are designed under classical statistics. These available sampling plans cannot be used for sentencing if the sample or the population has indeterminate, imprecise, unknown, incomplete or uncertain data. In this paper, we introduce the neutrosophic coefficient of variation (NCV) first. We design a sampling plan based on the NCV. The neutrosophic operating characteristic (NOC) function is then given and used to determine the neutrosophic plan parameters under some constraints. The neutrosophic plan parameters such as neutrosophic sample size and neutrosophic acceptance number are determined through the neutrosophic optimization solution. We compare the efficiency of the proposed plan under the neutrosophic statistical interval method with the sampling plan under classical statistics. A real example which has indeterminate data is given to illustrate the proposed plan.
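A minimal sketch of working with an interval-valued (neutrosophic) coefficient of variation, assuming observations recorded as [lower, upper] pairs and a hypothetical acceptance limit; the authors' plan-parameter optimization under the NOC function is not reproduced here.

```python
import numpy as np

def neutrosophic_cv(data):
    """Interval coefficient of variation for interval-valued data supplied as an
    (n, 2) array of [lower, upper] measurements: the CV is computed separately
    at the lower and upper ends of each observation."""
    data = np.asarray(data, dtype=float)
    lo, hi = data[:, 0], data[:, 1]
    cv_lo = lo.std(ddof=1) / lo.mean()
    cv_hi = hi.std(ddof=1) / hi.mean()
    return min(cv_lo, cv_hi), max(cv_lo, cv_hi)

# Hypothetical lot-sentencing rule: accept the lot only if the whole
# NCV interval lies below a specified acceptance limit k_a.
data = [[2.1, 2.3], [1.9, 2.0], [2.4, 2.6], [2.0, 2.2], [2.2, 2.2]]
ncv = neutrosophic_cv(data)
k_a = 0.15
print("NCV interval:", ncv, "-> accept" if ncv[1] <= k_a else "-> reject")
```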


2016 ◽  
Vol 48 (1) ◽  
pp. 23
Author(s):  
A. Arbab ◽  
F. Mirphakhar

The distribution of adults and larvae of Bactrocera oleae (Diptera: Tephritidae), a key pest of olive, was studied in olive orchards. The first objective was to analyze the dispersion of this insect on olive, and the second was to develop sampling plans based on fixed levels of precision for estimating B. oleae populations. Taylor's power law and Iwao's patchiness regression models were used to analyze the data. Our results document that Iwao's patchiness regression provided a better description of the relationship between variance and mean density. Taylor's b and Iwao's β were both significantly greater than 1, indicating that adults and larvae had an aggregated spatial distribution. This result was further supported by the calculated common k of 2.17 and 4.76 for adults and larvae, respectively. Iwao's α for larvae was significantly less than 0, indicating that the basic distribution component of B. oleae is the individual insect. Optimal sample sizes for fixed precision levels of 0.10 and 0.25 were estimated with Iwao's patchiness coefficients. The optimum sample size for adults and larvae fluctuated throughout the seasons and depended upon the fly density and the desired level of precision. For adults, it generally ranged from 2 to 11 and from 7 to 15 traps to achieve precision levels of 0.25 and 0.10, respectively. With respect to optimum sample size, the developed fixed-precision sequential sampling plan was suitable for estimating fly density at a precision level of D = 0.25. The sampling plans presented here should be a tool for research on pest management decisions for B. oleae.
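A sketch of these two dispersion fits and the fixed-precision sample size calculation, assuming the standard forms of Taylor's power law (log s² = log a + b log m), Iwao's patchiness regression (m* = α + β m), and the usual sample size formula n = ((α + 1)/m + β − 1)/D² based on Iwao's coefficients; the trap counts below are illustrative, not the paper's data.

```python
import numpy as np

def taylor_power_law(means, variances):
    """Fit log10(s^2) = log10(a) + b*log10(m); returns (a, b)."""
    b, log_a = np.polyfit(np.log10(means), np.log10(variances), 1)
    return 10 ** log_a, b

def iwao_patchiness(means, variances):
    """Fit mean crowding m* = m + (s^2/m - 1) against m; returns (alpha, beta)."""
    m = np.asarray(means, float)
    s2 = np.asarray(variances, float)
    m_star = m + (s2 / m - 1.0)
    beta, alpha = np.polyfit(m, m_star, 1)
    return alpha, beta

def optimal_sample_size(mean_density, alpha, beta, D):
    """Fixed-precision sample size from Iwao's coefficients:
    n = ((alpha + 1)/m + beta - 1) / D^2."""
    return ((alpha + 1.0) / mean_density + beta - 1.0) / D ** 2

# Illustrative seasonal trap counts (mean and variance per sampling date).
means = [0.8, 1.5, 3.2, 5.1, 2.4]
variances = [1.1, 2.9, 9.8, 21.0, 6.0]
a, b = taylor_power_law(means, variances)
alpha, beta = iwao_patchiness(means, variances)
for D in (0.25, 0.10):
    n = optimal_sample_size(np.mean(means), alpha, beta, D)
    print(f"D={D}: n ~ {np.ceil(n):.0f} traps (Taylor b={b:.2f}, Iwao beta={beta:.2f})")
```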


2007 ◽  
Vol 90 (4) ◽  
pp. 1028-1035 ◽  
Author(s):  
Guner Ozay ◽  
Ferda Seyhan ◽  
Aysun Yilmaz ◽  
Thomas B Whitaker ◽  
Andrew B Slate ◽  
...  

Abstract About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in almonds, pistachios, hazelnuts, and Brazil nuts. Studies were developed to measure the uncertainty and distribution among replicated sample aflatoxin test results taken from aflatoxin-contaminated treenut lots. The uncertainty and distribution information is used to develop a model that can evaluate the performance (risk of misclassifying lots) of aflatoxin sampling plan designs for treenuts. Once the performance of aflatoxin sampling plans can be predicted, they can be designed to reduce the risks of misclassifying lots traded in either the domestic or export markets. A method was developed to evaluate the performance of sampling plans designed to detect aflatoxin in hazelnut lots. Twenty hazelnut lots with varying levels of contamination were sampled according to an experimental protocol where 16 test samples were taken from each lot. The observed aflatoxin distribution among the 16 aflatoxin sample test results was compared to lognormal, compound gamma, and negative binomial distributions. The negative binomial distribution was selected to model aflatoxin distribution among sample test results because it gave acceptable fits to observed distributions among sample test results taken from a wide range of lot concentrations. Using the negative binomial distribution, computer models were developed to calculate operating characteristic curves for specific aflatoxin sampling plan designs. The effect of sample size and accept/reject limits on the chances of rejecting good lots (sellers' risk) and accepting bad lots (buyers' risk) was demonstrated for various sampling plan designs.
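A sketch of an operating characteristic calculation of this general type, assuming the sample test result is modeled as a negative binomial count with mean equal to the lot concentration and a dispersion parameter fitted elsewhere; the accept/reject limit and parameter values below are illustrative, not the study's fitted values.

```python
import numpy as np
from scipy.stats import nbinom

def oc_curve(lot_means, accept_limit, dispersion):
    """Probability of accepting a lot as a function of its true aflatoxin
    concentration, with the sample test result modeled as a negative binomial
    count: mean = lot concentration, shape = dispersion."""
    probs = []
    for mu in lot_means:
        p = dispersion / (dispersion + mu)      # scipy's (n, p) parameterization
        probs.append(nbinom.cdf(accept_limit, dispersion, p))
    return np.array(probs)

# Illustrative OC curve: accept if the sample test result is <= 10 (ng/g,
# treated as a count), with dispersion parameter 0.5.
lot_means = np.array([1, 2, 5, 10, 15, 20, 30])
for mu, pa in zip(lot_means, oc_curve(lot_means, accept_limit=10, dispersion=0.5)):
    print(f"lot mean {mu:>2}: P(accept) = {pa:.3f}")
```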


Biostatistics ◽  
2019 ◽  
Author(s):  
Jon Arni Steingrimsson ◽  
Joshua Betz ◽  
Tianchen Qian ◽  
Michael Rosenblum

Summary We consider the problem of designing a confirmatory randomized trial for comparing two treatments versus a common control in two disjoint subpopulations. The subpopulations could be defined in terms of a biomarker or disease severity measured at baseline. The goal is to determine which treatments benefit which subpopulations. We develop a new class of adaptive enrichment designs tailored to solving this problem. Adaptive enrichment designs involve a preplanned rule for modifying enrollment based on accruing data in an ongoing trial. At the interim analysis after each stage, for each subpopulation, the preplanned rule may decide to stop enrollment or to stop randomizing participants to one or more study arms. The motivation for this adaptive feature is that interim data may indicate that a subpopulation, such as those with lower disease severity at baseline, is unlikely to benefit from a particular treatment while uncertainty remains for the other treatment and/or subpopulation. We optimize these adaptive designs to have the minimum expected sample size under power and Type I error constraints. We compare the performance of the optimized adaptive design versus an optimized nonadaptive (single stage) design. Our approach is demonstrated in simulation studies that mimic features of a completed trial of a medical device for treating heart failure. The optimized adaptive design has 25% smaller expected sample size compared to the optimized nonadaptive design; however, the cost is that the optimized adaptive design has 8% greater maximum sample size. Open-source software that implements the trial design optimization is provided, allowing users to investigate the tradeoffs in using the proposed adaptive versus standard designs.
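A toy simulation, not the authors' optimization, illustrating how a preplanned futility rule for a subpopulation reduces expected sample size relative to a fixed design; the effect sizes, stage sizes, and futility threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(effect_sub1, effect_sub2, n_stage, z_futility, reps=20000):
    """Two-stage sketch: each subpopulation enrolls n_stage treated/control pairs
    per stage; after stage 1, enrollment in a subpopulation stops if its interim
    z-statistic falls below z_futility."""
    total_n = np.zeros(reps)
    for i in range(reps):
        n_used = 0
        for effect in (effect_sub1, effect_sub2):
            z1 = effect * np.sqrt(n_stage / 2.0) + rng.standard_normal()
            n_used += 2 * n_stage                 # stage-1 patients (both arms)
            if z1 >= z_futility:                  # continue this subpopulation
                n_used += 2 * n_stage             # stage-2 patients
        total_n[i] = n_used
    return total_n.mean()

# Illustrative comparison: one subpopulation benefits, the other does not.
adaptive = simulate_trial(effect_sub1=0.4, effect_sub2=0.0,
                          n_stage=50, z_futility=0.0)
fixed = 2 * (2 * 2 * 50)                          # both subpopulations, both stages
print(f"expected N (adaptive) ~ {adaptive:.0f}  vs fixed design N = {fixed}")
```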


1992 ◽  
Vol 71 (1) ◽  
pp. 3-14 ◽  
Author(s):  
John E. Overall ◽  
Robert S. Atlas

A statistical model for combining p values from multiple tests of significance is used to define rejection and acceptance regions for two-stage and three-stage sampling plans. Type I error rates, power, frequencies of early termination decisions, and expected sample sizes are compared. Both the two-stage and three-stage procedures provide appropriate protection against Type I errors. The two-stage sampling plan with its single interim analysis entails minimal loss in power and provides substantial reduction in expected sample size as compared with a conventional single end-of-study test of significance for which power is in the adequate range. The three-stage sampling plan with its two interim analyses introduces somewhat greater reduction in power, but it compensates with greater reduction in expected sample size. Either interim-analysis strategy is more efficient than a single end-of-study analysis in terms of power per unit of sample size.
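A sketch of a two-stage plan of this general kind, assuming Fisher's method for combining the stage-wise p values and illustrative rejection/acceptance boundaries at the interim; Type I error rate, power, and expected sample size are estimated by simulation, and the boundaries are not those of the cited model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def two_stage_plan(effect, n_stage, p_reject=0.01, p_accept=0.50,
                   c_final=0.05, reps=20000):
    """Two-stage plan: reject early if the stage-1 p value is <= p_reject,
    accept (stop for futility) if it is >= p_accept; otherwise combine the two
    stage-wise p values with Fisher's method at the final analysis."""
    rejections, n_total = 0, 0.0
    for _ in range(reps):
        z1 = effect * np.sqrt(n_stage) + rng.standard_normal()
        p1 = stats.norm.sf(z1)                      # one-sided stage-1 p value
        if p1 <= p_reject:
            rejections += 1; n_total += n_stage; continue
        if p1 >= p_accept:
            n_total += n_stage; continue
        z2 = effect * np.sqrt(n_stage) + rng.standard_normal()
        p2 = stats.norm.sf(z2)
        chi2 = -2.0 * (np.log(p1) + np.log(p2))     # Fisher combination
        if stats.chi2.sf(chi2, df=4) <= c_final:
            rejections += 1
        n_total += 2 * n_stage
    return rejections / reps, n_total / reps

for label, eff in (("Type I error (null)", 0.0), ("power (effect = 0.3)", 0.3)):
    rate, en = two_stage_plan(eff, n_stage=50)
    print(f"{label}: rejection rate = {rate:.3f}, expected n = {en:.1f}")
```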

