Recommendations of scientifically sound and appropriate sampling plans for parenteral drug products

Author(s):  
Ian Aled Jones ◽  
Alex Bird ◽  
Nathaniel Lochrie

This paper gives recommendations for defining sampling plans/sizes that are statistically justified or based on published guidance for typical parenteral drug products. Simple tables based on the ANSI/ASQ Z1.4 acceptance sampling plans or other published guidance have been collated to aid organisations in selecting appropriate sample sizes/plans for routine drug product manufacture. Key Words: USP <1790>, visual inspection, AQL, power, sample size
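
A minimal sketch of how a single-attribute acceptance sampling plan behaves is shown below. It computes the operating-characteristic (OC) curve, the probability of accepting a lot as a function of its true defect rate, under the binomial model; the plan values (n = 125, c = 3) are illustrative assumptions, not values taken from the ANSI/ASQ Z1.4 tables or the paper.

```python
# Illustrative OC curve for an attribute sampling plan (sample size n,
# acceptance number c), using the binomial approximation.
from scipy.stats import binom

def prob_accept(p_defective: float, n: int, c: int) -> float:
    """Probability of accepting a lot with true defect rate p_defective."""
    return binom.cdf(c, n, p_defective)

n, c = 125, 3  # hypothetical plan: inspect 125 units, accept if <= 3 defects
for p in (0.005, 0.01, 0.025, 0.05):
    print(f"defect rate {p:.1%}: P(accept) = {prob_accept(p, n, c):.3f}")
```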

2000 ◽  
Vol 83 (5) ◽  
pp. 1279-1284 ◽  
Author(s):  
Anders S Johansson ◽  
Thomas B Whitaker ◽  
Francis G Giesbrecht ◽  
Winston M Hagler ◽  
James H Young

Abstract The effects of changes in sample size and/or sample acceptance level on the performance of aflatoxin sampling plans for shelled corn were investigated. Six sampling plans were evaluated for a range of sample sizes and sample acceptance levels. For a given sample size, decreasing the sample acceptance level decreases the percentage of lots accepted while increasing the percentage of lots rejected at all aflatoxin concentrations, and decreases the average aflatoxin concentration in lots accepted and lots rejected. For a given sample size where the sample acceptance level decreases relative to a fixed regulatory guideline, the number of false positives increases and the number of false negatives decreases. For a given sample size where the sample acceptance level increases relative to a fixed regulatory guideline, the number of false positives decreases and the number of false negatives increases. For a given sample acceptance level, increasing the sample size increases the percentage of lots accepted at concentrations below the regulatory guideline while increasing the percentage of lots rejected at concentrations above the regulatory guideline, and decreases the average aflatoxin concentration in the lots accepted while increasing the average aflatoxin concentration in the rejected lots. For a given sample acceptance level that equals the regulatory guideline, increasing the sample size decreases misclassification of lots, both false positives and false negatives.
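
A hypothetical Monte Carlo sketch of the trade-off described above follows. The lognormal sampling-variability model and all numeric values are illustrative assumptions, not the distributions or sampling plans evaluated in the paper; the sketch only shows that misclassification of a lot near the guideline falls as the sample size grows.

```python
# Simulate accept/reject decisions for a lot whose true aflatoxin
# concentration is close to the regulatory guideline.
import numpy as np

rng = np.random.default_rng(0)

def misclassification(true_conc, guideline, accept_level, sample_size, n_sim=20_000):
    """Fraction of simulated decisions that disagree with the lot's true
    status relative to the regulatory guideline."""
    # Sampling variability shrinks as sample size grows (illustrative model).
    sigma = 0.6 / np.sqrt(sample_size)
    estimates = true_conc * rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n_sim)
    accepted = estimates <= accept_level
    lot_is_good = true_conc <= guideline
    return np.mean(accepted != lot_is_good)

for n in (1, 5, 10):  # sample size in arbitrary units
    err = misclassification(true_conc=18, guideline=20, accept_level=20, sample_size=n)
    print(f"sample size {n}: misclassification rate = {err:.3f}")
```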


Methodology ◽  
2016 ◽  
Vol 12 (2) ◽  
pp. 61-71 ◽  
Author(s):  
Antoine Poncet ◽  
Delphine S. Courvoisier ◽  
Christophe Combescure ◽  
Thomas V. Perneger

Abstract. Many applied researchers are taught to use the t-test when distributions appear normal and/or sample sizes are large, and non-parametric tests otherwise, and fear inflated error rates if the “wrong” test is used. In a simulation study (four tests: t-test, Mann-Whitney test, robust t-test, permutation test; seven sample sizes between 2 × 10 and 2 × 500; four distributions: normal, uniform, log-normal, bimodal; under the null and alternative hypotheses), we show that type I errors are well controlled in all conditions. The t-test is most powerful under the normal and the uniform distributions, the Mann-Whitney test under the log-normal distribution, and the robust t-test under the bimodal distribution. Importantly, even the t-test was more powerful under asymmetric distributions than under the normal distribution for the same effect size. It appears that normality and sample size do not matter for the selection of a test to compare two groups of the same size and variance. The researcher can opt for the test that best fits the scientific hypothesis, without fear of poor test performance.
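
A minimal sketch, assuming SciPy, of the kind of simulation described: estimating type I error and power of the t-test and the Mann-Whitney test under the normal distribution. The sample size, effect size, and number of replications are illustrative choices, not the settings used in the study.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)

def rejection_rate(draw_a, draw_b, test, n=50, reps=2000, alpha=0.05):
    """Fraction of simulated two-group comparisons with p < alpha."""
    hits = 0
    for _ in range(reps):
        a, b = draw_a(n), draw_b(n)
        hits += test(a, b).pvalue < alpha
    return hits / reps

normal = lambda n: rng.normal(0, 1, n)
normal_shift = lambda n: rng.normal(0.5, 1, n)  # alternative: shift of 0.5 SD

print("type I error, t-test:      ", rejection_rate(normal, normal, ttest_ind))
print("type I error, Mann-Whitney:", rejection_rate(normal, normal, mannwhitneyu))
print("power, t-test:             ", rejection_rate(normal, normal_shift, ttest_ind))
print("power, Mann-Whitney:       ", rejection_rate(normal, normal_shift, mannwhitneyu))
```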


1998 ◽  
Vol 17 (2) ◽  
pp. 287-295 ◽  
Author(s):  
David A. Mott ◽  
Jon C. Schommer ◽  
William R. Doucette ◽  
David H. Kreling

The authors describe the pharmaceutical utilization system and present the conceptual framework for agency theory. They then apply agency theory to the selection of pharmaceuticals and the role of drug formularies in drug selection. The use of drug formularies can increase the goal conflict and uncertainty related to the selection of drug products. The authors address public policy and research directions to suggest ways of reducing the level of goal conflict and uncertainty associated with drug selection. Recognition of agency relationships and the environment surrounding agency relationships appear to be important for the development and analysis of future policy regarding selection decisions pertaining to pharmaceuticals.


2008 ◽  
Vol 54 (4) ◽  
pp. 729-737 ◽  
Author(s):  
Mariska M G Leeflang ◽  
Karel G M Moons ◽  
Johannes B Reitsma ◽  
Aeilko H Zwinderman

Abstract Background: Optimal cutoff values for test results involving continuous variables are often derived in a data-driven way. This approach, however, may lead to overly optimistic measures of diagnostic accuracy. We evaluated the magnitude of the bias in sensitivity and specificity associated with data-driven selection of cutoff values and examined potential solutions to reduce this bias. Methods: Different sample sizes, distributions, and prevalences were used in a simulation study. We compared data-driven estimates of accuracy based on the Youden index with the true values and calculated the median bias. Three alternative approaches (assuming a specific distribution, leave-one-out, smoothed ROC curve) were examined for their ability to reduce this bias. Results: The magnitude of bias caused by data-driven optimization of cutoff values was inversely related to sample size. If the true values for sensitivity and specificity are both 84%, the estimates in studies with a sample size of 40 will be approximately 90%. If the sample size increases to 200, the estimates will be 86%. The distribution of the test results had little impact on the amount of bias when sample size was held constant. More robust methods of optimizing cutoff values were less prone to bias, but the performance deteriorated if the underlying assumptions were not met. Conclusions: Data-driven selection of the optimal cutoff value can lead to overly optimistic estimates of sensitivity and specificity, especially in small studies. Alternative methods can reduce this bias, but finding robust estimates for cutoff values and accuracy requires considerable sample sizes.
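
The bias mechanism can be illustrated with a small simulation: choosing the cutoff that maximizes the Youden index on the same data used to estimate accuracy inflates both sensitivity and specificity. The normal distributions and sample size below are assumptions chosen so that the true sensitivity and specificity are both roughly 84%, mirroring the example in the abstract; they are not the simulation settings of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def apparent_accuracy(n_per_group=20, n_sim=2000, true_shift=2.0):
    """Mean data-driven sensitivity/specificity at the Youden-optimal cutoff."""
    sens_list, spec_list = [], []
    for _ in range(n_sim):
        healthy = rng.normal(0.0, 1.0, n_per_group)
        diseased = rng.normal(true_shift, 1.0, n_per_group)
        values = np.concatenate([healthy, diseased])
        # Evaluate every observed value as a candidate cutoff.
        best_youden, best_sens, best_spec = -np.inf, None, None
        for cut in values:
            sens = np.mean(diseased > cut)
            spec = np.mean(healthy <= cut)
            if sens + spec - 1 > best_youden:
                best_youden, best_sens, best_spec = sens + spec - 1, sens, spec
        sens_list.append(best_sens)
        spec_list.append(best_spec)
    return np.mean(sens_list), np.mean(spec_list)

# True sensitivity and specificity are about 0.84; the data-driven
# estimates with a total sample size of 40 come out noticeably higher.
print("mean data-driven sensitivity/specificity (n = 40):",
      apparent_accuracy(n_per_group=20))
```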


2021 ◽  
Vol 13 (3) ◽  
pp. 368 ◽
Author(s):  
Christopher A. Ramezan ◽  
Timothy A. Warner ◽  
Aaron E. Maxwell ◽  
Bradley S. Price

The size of the training data set is a major determinant of classification accuracy. Nevertheless, the collection of a large training data set for supervised classifiers can be a challenge, especially for studies covering a large area, which may be typical of many real-world applied projects. This work investigates how variations in training set size, ranging from a large sample size (n = 10,000) to a very small sample size (n = 40), affect the performance of six supervised machine-learning algorithms applied to classify large-area high-spatial-resolution (HR) (1–5 m) remotely sensed data within the context of a geographic object-based image analysis (GEOBIA) approach. GEOBIA, in which adjacent similar pixels are grouped into image-objects that form the unit of the classification, offers the potential benefit of allowing multiple additional variables, such as measures of object geometry and texture, thus increasing the dimensionality of the classification input data. The six supervised machine-learning algorithms are support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), learning vector quantization (LVQ), and gradient-boosted trees (GBM). RF, the algorithm with the highest overall accuracy, was notable for its negligible decrease in overall accuracy, 1.0%, when training sample size decreased from 10,000 to 315 samples. GBM provided similar overall accuracy to RF; however, the algorithm was very expensive in terms of training time and computational resources, especially with large training sets. In contrast to RF and GBM, NEU and SVM were particularly sensitive to decreasing sample size, with NEU classifications generally producing overall accuracies that were on average slightly higher than SVM classifications for larger sample sizes, but lower than SVM for the smallest sample sizes. NEU, however, required a longer processing time. The k-NN classifier saw less of a drop in overall accuracy than NEU and SVM as training set size decreased; however, the overall accuracies of k-NN were typically lower than those of the RF, NEU, and SVM classifiers. LVQ generally had the lowest overall accuracy of all six methods, but was relatively insensitive to sample size, down to the smallest sample sizes. Overall, due to its relatively high accuracy with small training sets, minimal variation in overall accuracy between very large and small training sets, and relatively short processing time, RF was a good classifier for large-area land-cover classifications of HR remotely sensed data, especially when training data are scarce. However, as performance of different supervised classifiers varies in response to training set size, investigating multiple classification algorithms is recommended to achieve optimal accuracy for a project.
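
A hedged sketch of the general experiment, a learning curve over training-set size, is given below. It uses a synthetic scikit-learn dataset as a stand-in for the image-object features in the study; the three classifiers shown and the grid of training-set sizes are illustrative choices, not the full protocol of the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic multi-class data standing in for object-based image features.
X, y = make_classification(n_samples=12_000, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=2_000,
                                                  random_state=0, stratify=y)

classifiers = {"RF": RandomForestClassifier(random_state=0),
               "SVM": SVC(),
               "k-NN": KNeighborsClassifier()}

# Shrink the training set and watch how overall accuracy responds.
for n_train in (10_000, 1_000, 315, 40):
    for name, clf in classifiers.items():
        clf.fit(X_pool[:n_train], y_pool[:n_train])
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"n_train={n_train:>6}  {name:4s}  accuracy={acc:.3f}")
```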


2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
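
A rough power check makes the point concrete. The medium effect size (d = 0.5) and equal group sizes assumed below are illustrative, not values from the original study.

```python
# Power of a two-group comparison with 58 participants split 2 x 29.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=29, ratio=1.0, alpha=0.05)
print(f"power to detect a medium effect (d = 0.5) with 2 x 29: {power:.2f}")

# Sample size per group needed to reach 80% power for the same effect.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"required n per group: {n_needed:.0f}")
```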

