Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs

2011 ◽  
Vol 15 (1) ◽  
pp. 73 ◽  
Author(s):  
Laszlo Endrenyi ◽  
Laszlo Tothfalusi

Purpose. To provide tables of the sample sizes required by the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) for the design of bioequivalence (BE) studies involving highly variable drugs, and to elucidate the complicated relationship between sample size and within-subject variation. Methods. Three- and four-period studies were simulated at various sample sizes and evaluated, across a range of within-subject variations and true ratios of the two geometric means (GMR), by the approaches of scaled average BE and of average BE with expanding limits. The sample sizes required to yield 80% and 90% statistical power were determined. Results. Because the regulatory expectations are complicated, so are the features of the required sample sizes. When the true GMR = 1.0, then, without additional constraints, the sample size is independent of the within-subject variation. When the true GMR departs from 1.0, the required sample sizes rise at variations just above 30%. An additional regulatory constraint on the point estimate of the GMR and a cap on the use of expanding limits further increase the required sample size at high variations. Fewer subjects are required by the FDA procedure than by the EMA procedure. Conclusions. The methods proposed by the EMA and FDA lower the required sample sizes in comparison with unscaled average BE. However, each additional regulatory requirement (applying the mixed procedure, imposing a constraint on the point estimate of the GMR, and capping the application of expanding limits) raises the required number of subjects.
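As a rough illustration of the kind of simulation behind these sample-size tables, the sketch below estimates the power of average BE with expanding limits (ABEL) for one sample size. It is a deliberately simplified model (paired log-scale differences, a crude within-subject SD estimate, the commonly cited EMA scaling constant 0.760 and the cap at ln(1.4319)); it is not the authors' simulation code.

```python
# Minimal Monte Carlo sketch of average BE with expanding limits (ABEL),
# under a simplified paired log-scale model. Constants and the s_WR estimate
# are illustrative assumptions, not the EMA algorithm verbatim.
import numpy as np
from scipy import stats

def abel_power(n, cv_wr, true_gmr, n_sim=5000, seed=1):
    """Approximate power of average BE with expanding limits for one sample size."""
    rng = np.random.default_rng(seed)
    sigma_wr = np.sqrt(np.log(1.0 + cv_wr**2))    # within-subject SD on the log scale
    log_gmr_true = np.log(true_gmr)
    passes = 0
    for _ in range(n_sim):
        # simulated per-subject period differences (T - R) on the log scale
        d = rng.normal(log_gmr_true, np.sqrt(2.0) * sigma_wr, size=n)
        est, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
        lo, hi = stats.t.interval(0.90, df=n - 1, loc=est, scale=se)   # 90% CI
        s_wr_hat = d.std(ddof=1) / np.sqrt(2.0)                        # crude s_WR estimate
        # expand the limits beyond ln(1.25) when CV_WR > 30%, capped at ~50% CV
        width = np.log(1.25)
        if s_wr_hat > 0.294:                                 # s_WR at CV_WR = 30%
            width = min(0.760 * s_wr_hat, np.log(1.4319))    # EMA cap
        point_ok = np.log(0.80) <= est <= np.log(1.25)       # GMR point-estimate constraint
        if point_ok and lo >= -width and hi <= width:
            passes += 1
    return passes / n_sim

# e.g. power at n = 36 subjects, CV_WR = 45%, true GMR = 0.90
print(abel_power(36, 0.45, 0.90))
```

Running such a function over a grid of sample sizes and picking the smallest n that reaches 80% or 90% power reproduces, in spirit, how tables of required sample sizes are built.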

2018 ◽  
Vol 7 (6) ◽  
pp. 68
Author(s):  
Karl Schweizer ◽  
Siegbert Reiß ◽  
Stefan Troche

An investigation of the suitability of threshold-based and threshold-free approaches for structural investigations of binary data is reported. Both approaches implicitly establish a relationship between binary data following the binomial distribution on the one hand and continuous random variables assumed to follow a normal distribution on the other. In two simulation studies we investigated whether the fit results confirm the establishment of such a relationship, whether the differences between correct and incorrect models are retained, and to what degree the sample size influences the results. Both approaches proved to establish the relationship: with the threshold-free approach this was achieved by customary ML estimation, whereas robust ML estimation was necessary in the threshold-based approach. Discrimination between correct and incorrect models was observed for both approaches, with larger CFI differences for the threshold-free approach than for the threshold-based approach. Dependence on sample size characterized the threshold-based approach but not the threshold-free approach: the threshold-based approach tended to perform better with large sample sizes, while the threshold-free approach performed better with smaller sample sizes.
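The sketch below is our own minimal illustration (not the authors' simulation design) of the relationship both approaches rest on: dichotomizing normally distributed variables at a threshold attenuates the ordinary product-moment (phi) correlation relative to the latent correlation, which is the gap threshold-based (tetrachoric/polychoric) models are meant to bridge.

```python
# Dichotomizing latent normal variables attenuates the phi correlation;
# values (latent correlation, threshold) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
rho_latent = 0.6                                    # correlation of the latent normal variables
cov = [[1.0, rho_latent], [rho_latent, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T

threshold = 0.5                                     # dichotomization point (item difficulty)
bx, by = (x > threshold).astype(float), (y > threshold).astype(float)

phi = np.corrcoef(bx, by)[0, 1]                     # correlation of the binary indicators
print(f"latent r = {rho_latent:.2f}, phi after dichotomization = {phi:.2f}")
```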


2009 ◽  
Vol 31 (4) ◽  
pp. 500-506 ◽  
Author(s):  
Robert Slavin ◽  
Dewi Smith

Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of the Best Evidence Encyclopedia. As predicted, there was a significant negative correlation between sample size and effect size. The differences in effect sizes between small and large experiments were much greater than those between randomized and matched experiments. Explanations for the effects of sample size on effect size are discussed.
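As a hedged illustration of the correlation examined here, the snippet below computes the association between sample size and effect size across a handful of hypothetical studies; the numbers are placeholders, not values from the 185 reviewed studies.

```python
# Placeholder meta-analytic table: correlation of study size with effect size.
import numpy as np
from scipy import stats

sample_sizes = np.array([45, 80, 150, 400, 1200, 3000])        # assumed study sizes
effect_sizes = np.array([0.55, 0.40, 0.32, 0.20, 0.12, 0.10])  # assumed effect sizes (d)

r, p = stats.pearsonr(np.log10(sample_sizes), effect_sizes)
print(f"r = {r:.2f}, p = {p:.3f}")   # a negative r reflects the small-study pattern
```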


2009 ◽  
Vol 12 (1) ◽  
pp. 138 ◽  
Author(s):  
Laszlo Endrenyi ◽  
Laszlo Tothfalusi

Purpose. The FDA Working Group on Highly Variable (HV) Drugs recently presented interim procedures and conditions for determining the bioequivalence (BE) of HV drug products. These included analysis by the method of scaled average BE (SABE), a switching coefficient of variation of CV_S = 30% and a regulatory standardized variation of CV_0 = 25% for applying SABE, and a secondary regulatory criterion restricting the point estimate of the ratio of estimated geometric means (GMR) of the two formulations to 0.80-1.25. These conditions are scrutinized in the present communication. Methods. Three-period BE studies were simulated under various statistical and regulatory assumptions. Power curves, obtained by gradually increasing the true GMR, compared the performance of SABE, of a constrained point estimate of the GMR (PE/GMR), and of the composite of these two approaches. The consumer risk of each procedure was evaluated. Results. With CV_0 = 30% and PE/GMR = 0.80-1.25, the composite criterion of BE relied on the confidence limits of SABE. In contrast, with CV_0 = 25% and/or PE/GMR = 0.87-1.15, the composite criterion approached almost completely the features of the GMR point estimate, especially at high within-subject variation. The consumer risk was near 5% with CV_0 = 30% but much higher with CV_0 = 25%. Conclusions. The condition of CV_S = CV_0 = 30% with PE/GMR = 0.80-1.25 is recommended as a composite regulatory criterion. With alternative settings, such as the recommended CV_0 = 25% and/or PE/GMR = 0.87-1.15, the composite criterion would reflect almost entirely the GMR point estimate, which would be an undesirable outcome.
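A simplified sketch of the composite criterion discussed above is given below. It checks the GMR point-estimate constraint together with the scaled criterion (ln GMR)² ≤ θ·s_WR², evaluated at point estimates only; the full FDA procedure uses an upper confidence bound (Hyslop/Howe-type method), so this is an illustrative approximation, not the regulatory algorithm.

```python
# Point-estimate approximation of the composite SABE decision; sigma_0 = 0.25 is
# used as the regulatory standardized variation (an approximation of CV_0 = 25%).
import numpy as np

def composite_sabe(gmr_hat, s_wr, cv_switch=0.30, sigma_0=0.25, pe_limits=(0.80, 1.25)):
    """Return True if the simplified composite BE criterion is met."""
    theta = (np.log(1.25) / sigma_0) ** 2             # regulatory constant
    s_switch = np.sqrt(np.log(1.0 + cv_switch ** 2))  # log-scale SD at the switching CV
    point_ok = pe_limits[0] <= gmr_hat <= pe_limits[1]
    if s_wr <= s_switch:
        # below the switching variation: ordinary (unscaled) average BE limits
        scaled_ok = np.log(pe_limits[0]) <= np.log(gmr_hat) <= np.log(pe_limits[1])
    else:
        # scaled criterion: (ln GMR)^2 <= theta * s_WR^2
        scaled_ok = np.log(gmr_hat) ** 2 <= theta * s_wr ** 2
    return point_ok and scaled_ok

print(composite_sabe(gmr_hat=1.18, s_wr=0.40))   # e.g. CV_WR of roughly 42%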


2006 ◽  
Vol 6 (2) ◽  
pp. 31-37
Author(s):  
K. Ohno ◽  
E. Kadota ◽  
Y. Kondo ◽  
T. Kamei ◽  
Y. Magara

The cancer risks posed by ten substances in raw and purified water were estimated for each municipality in Japan to compare risks between raw and purified water and between municipalities. Water concentrations were estimated from statistical data. Assigning a cancer unit risk to each substance and assuming additive toxicological effects across the multiple carcinogens, the total cancer risks of the waters were estimated. The geometric means of the total cancer risk in raw and purified water were 1.16×10⁻⁵ and 2.18×10⁻⁵, respectively. In raw water, arsenic accounted for 97% of the total cancer risk; in purified water, the four trihalomethanes (THMs) accounted for 54%. The increase in total cancer risk in purified water was due to THMs. With regard to geographical variation, the relationship between population size and total cancer risk was investigated: cancer risks in both raw and purified water were higher in big cities with populations of more than a million. One plausible reason for the higher risks in purified water in the big cities is a larger chlorination dose due to the huge water supply areas; the reason for the increase in raw water remained unclear.
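The additivity assumption used above can be illustrated with a short sketch: total risk is the sum of concentration × unit-risk contributions, from which each substance's contribution ratio follows. The substances and numbers below are placeholders, not the study's estimates.

```python
# Additive cancer-risk illustration with placeholder concentrations and unit risks.
# Unit risks here are expressed per (mg/L) of drinking-water concentration for simplicity.
concentrations_mg_per_L = {"arsenic": 0.003, "chloroform": 0.020, "bromodichloromethane": 0.005}
unit_risk_per_mg_per_L = {"arsenic": 3.0e-3, "chloroform": 1.5e-4, "bromodichloromethane": 6.0e-4}

total_risk = sum(concentrations_mg_per_L[s] * unit_risk_per_mg_per_L[s]
                 for s in concentrations_mg_per_L)
contributions = {s: concentrations_mg_per_L[s] * unit_risk_per_mg_per_L[s] / total_risk
                 for s in concentrations_mg_per_L}
print(f"total risk = {total_risk:.2e}")
print({s: f"{c:.0%}" for s, c in contributions.items()})
```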


2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 218-219
Author(s):  
Andres Fernando T Russi ◽  
Mike D Tokach ◽  
Jason C Woodworth ◽  
Joel M DeRouchey ◽  
Robert D Goodband ◽  
...  

Abstract The swine industry has been constantly evolving to select animals with improved performance traits and to minimize variation in body weight (BW) in order to meet packer specifications. Understanding variation therefore presents an opportunity for producers to find strategies to reduce or manage the variation of pigs in a barn. A systematic review and meta-analysis was conducted by collecting data from multiple studies and available data sets in order to develop prediction equations for the coefficient of variation (CV) and standard deviation (SD) of BW as a function of mean BW. Information on BW variation from 16 papers was recorded, providing approximately 204 data points. Together, these data included 117,268 individually weighed pigs, with study sample sizes ranging from 104 to 4,108 pigs. A random-effects model with study as a random effect was developed. Observations were weighted by sample size as an estimate of precision, so that larger data sets contributed more to the model. Regression equations were developed using the nlme package of R to determine the relationship between BW and its variation, with polynomial regression analysis conducted separately for each variation measurement. When CV was reported in a data set, SD was calculated, and vice versa. The resulting prediction equations were: CV (%) = 20.04 − 0.135 × BW + 0.00043 × BW², R² = 0.79; SD = 0.41 + 0.150 × BW − 0.00041 × BW², R² = 0.95. These equations suggest a decreasing quadratic relationship between the mean CV of a population and pig BW, whereby the rate of decrease becomes smaller as mean BW increases from birth to market. Conversely, the rate of increase of the SD of a population of pigs becomes smaller as mean BW increases from birth to market.
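The reported prediction equations can be applied directly; the sketch below implements them as stated in the abstract (BW is mean body weight, presumably in kg, since the units are not given here).

```python
# Prediction equations for pig BW variation, transcribed from the abstract.
def predict_cv_percent(bw):
    """Predicted coefficient of variation (%) of pig BW at a given mean BW."""
    return 20.04 - 0.135 * bw + 0.00043 * bw ** 2

def predict_sd(bw):
    """Predicted standard deviation of pig BW at a given mean BW."""
    return 0.41 + 0.150 * bw - 0.00041 * bw ** 2

for bw in (10, 50, 100, 130):
    print(bw, round(predict_cv_percent(bw), 1), round(predict_sd(bw), 1))
```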


Vaccines ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 693
Author(s):  
Harald Walach ◽  
Rainer J. Klement ◽  
Wouter Aukema

Background: COVID-19 vaccines have had expedited reviews without sufficient safety data. We wanted to compare risks and benefits. Method: We calculated, from a large Israeli field study, the number needed to vaccinate (NNTV) to prevent one death. We accessed the Adverse Drug Reactions (ADR) database of the European Medicines Agency and the Dutch National Register (lareb.nl) to extract the number of cases reporting severe side effects and the number of cases with fatal side effects. Result: The NNTV is between 200 and 700 to prevent one case of COVID-19 for the mRNA vaccine marketed by Pfizer, while the NNTV to prevent one death is between 9,000 and 50,000 (95% confidence interval), with 16,000 as a point estimate. The number of cases experiencing adverse reactions has been reported to be 700 per 100,000 vaccinations. Currently, we see 16 serious side effects per 100,000 vaccinations, and the number of fatal side effects is 4.11 per 100,000 vaccinations. For three deaths prevented by vaccination, we have to accept two inflicted by vaccination. Conclusions: This lack of clear benefit should cause governments to rethink their vaccination policy.
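For readers unfamiliar with the quantity, the NNTV is simply the reciprocal of the absolute risk reduction. The sketch below shows this calculation with placeholder event rates; it does not reproduce the Israeli field-study data.

```python
# Generic NNTV calculation: NNTV = 1 / absolute risk reduction (placeholder rates).
def nntv(risk_unvaccinated, risk_vaccinated):
    """Number needed to vaccinate to prevent one event."""
    arr = risk_unvaccinated - risk_vaccinated     # absolute risk reduction
    return float("inf") if arr <= 0 else 1.0 / arr

# placeholder event risks over the observation window
print(round(nntv(0.004, 0.0015)))   # -> 400, i.e. ~400 vaccinations per event prevented
```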


Author(s):  
Joseph Pryce ◽  
Lisa J Reimer

Abstract Background: Molecular xenomonitoring (MX), the detection of pathogen DNA in mosquitoes, is a recommended approach to support lymphatic filariasis (LF) elimination efforts. Potential roles of MX include detecting the presence of LF in communities and quantifying progress towards elimination of the disease. However, the relationship between MX results and human prevalence is poorly understood. Methods: We conducted a systematic review and meta-analysis of all previously conducted studies that reported the prevalence of filarial DNA in wild-caught mosquitoes (MX rate) and the corresponding prevalence of microfilariae (mf) in humans. We calculated a pooled estimate of MX sensitivity for detecting positive communities at a range of mf prevalence values and mosquito sample sizes, and conducted a linear regression to evaluate the relationship between mf prevalence and MX rate. Results: We identified 24 studies comprising 144 study communities. MX had an overall sensitivity of 98.3% (95% CI 41.5-99.9%) and identified 28 positive communities that were negative in the mf survey. Low sensitivity in some studies was attributed to small mosquito sample sizes (<1,000) and very low mf prevalence (<0.25%). Human mf prevalence and mass drug administration status accounted for approximately half of the variation in MX rate (R² = 0.49, p < 0.001). Data from longitudinal studies showed that, within a given study area, there is a strong linear relationship between MX rate and mf prevalence (R² = 0.78, p < 0.001). Conclusion: MX shows clear potential as a tool for detecting communities where LF is present and as a predictor of human mf prevalence.
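The linear relationship reported above can be illustrated with a short fit of MX rate against mf prevalence; the paired values below are placeholders, not data extracted from the 24 reviewed studies.

```python
# Linear fit of MX rate vs human mf prevalence with placeholder paired values.
import numpy as np
from scipy import stats

mf_prevalence = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 6.0])   # % mf prevalence (assumed)
mx_rate = np.array([0.1, 0.3, 0.5, 1.1, 2.0, 3.1])         # % positive mosquitoes (assumed)

res = stats.linregress(mf_prevalence, mx_rate)
print(f"slope = {res.slope:.2f}, R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3g}")
```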


2021 ◽  
Vol 13 (3) ◽  
pp. 368
Author(s):  
Christopher A. Ramezan ◽  
Timothy A. Warner ◽  
Aaron E. Maxwell ◽  
Bradley S. Price

The size of the training data set is a major determinant of classification accuracy. Nevertheless, the collection of a large training data set for supervised classifiers can be a challenge, especially for studies covering a large area, which may be typical of many real-world applied projects. This work investigates how variations in training set size, ranging from a large sample size (n = 10,000) to a very small sample size (n = 40), affect the performance of six supervised machine-learning algorithms applied to classify large-area high-spatial-resolution (HR) (1–5 m) remotely sensed data within the context of a geographic object-based image analysis (GEOBIA) approach. GEOBIA, in which adjacent similar pixels are grouped into image-objects that form the unit of the classification, offers the potential benefit of allowing multiple additional variables, such as measures of object geometry and texture, thus increasing the dimensionality of the classification input data. The six supervised machine-learning algorithms are support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), learning vector quantization (LVQ), and gradient-boosted trees (GBM). RF, the algorithm with the highest overall accuracy, was notable for its negligible decrease in overall accuracy, 1.0%, when the training sample size decreased from 10,000 to 315 samples. GBM provided overall accuracy similar to RF; however, the algorithm was very expensive in terms of training time and computational resources, especially with large training sets. In contrast to RF and GBM, NEU and SVM were particularly sensitive to decreasing sample size, with NEU classifications generally producing overall accuracies that were on average slightly higher than SVM classifications for larger sample sizes, but lower than SVM for the smallest sample sizes; NEU, however, required a longer processing time. The k-NN classifier saw less of a drop in overall accuracy than NEU and SVM as training set size decreased; however, the overall accuracies of k-NN were typically lower than those of the RF, NEU, and SVM classifiers. LVQ generally had the lowest overall accuracy of all six methods, but was relatively insensitive to sample size, down to the smallest sample sizes. Overall, due to its relatively high accuracy with small training sample sets, minimal variation in overall accuracy between very large and small sample sets, and relatively short processing time, RF was a good classifier for large-area land-cover classifications of HR remotely sensed data, especially when training data are scarce. However, as the performance of different supervised classifiers varies in response to training set size, investigating multiple classification algorithms is recommended to achieve optimal accuracy for a project.
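A small, self-contained sketch of the experimental pattern described above is given below: train two of the classifiers (RF and SVM) on progressively smaller training sets and compare overall accuracy on a fixed test set. It uses synthetic data, not the article's imagery, and scikit-learn defaults rather than the authors' tuning.

```python
# Compare RF and SVM accuracy as the training set shrinks, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=12_000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=2_000, random_state=0)

for n_train in (10_000, 1_000, 315, 40):
    X_tr, y_tr = X_pool[:n_train], y_pool[:n_train]
    for name, clf in (("RF", RandomForestClassifier(random_state=0)),
                      ("SVM", SVC(kernel="rbf", gamma="scale"))):
        acc = accuracy_score(y_test, clf.fit(X_tr, y_tr).predict(X_test))
        print(f"n={n_train:>6}  {name:<3}  accuracy={acc:.3f}")
```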


2015 ◽  
Vol 116 (9/10) ◽  
pp. 564-577 ◽  
Author(s):  
Rishabh Shrivastava ◽  
Preeti Mahajan

Purpose – The purpose of this paper is twofold. First, the study aims to investigate the relationship between the altmetric indicators from ResearchGate (RG) and the bibliometric indicators from the Scopus database. Second, the study seeks to examine the relationship amongst the RG altmetric indicators themselves. RG is a rich source of altmetric indicators such as Citations, RGScore, Impact Points, Profile Views, Publication Views, etc. Design/methodology/approach – To establish whether RG metrics give the same results as established sources of metrics, Pearson's correlation coefficients were calculated between the metrics provided by RG and the metrics obtained from Scopus. Pearson's correlation coefficients were also calculated amongst the metrics provided by RG. The data were collected by visiting the profile pages of all the members who had an account in RG under the Department of Physics, Panjab University, Chandigarh (India). Findings – The study showed that most of the RG metrics had strong positive correlations with the Scopus metrics, except for RGScore (RG) and Citations (Scopus), which showed a moderate positive correlation. The RG metrics also showed moderate to strong positive correlations amongst each other. Research limitations/implications – The limitation of this study is that more scientists and researchers may join RG in the future, so the data may change. The study focuses on the members who had an account in RG under the Department of Physics, Panjab University, Chandigarh (India); further studies could be conducted with a larger sample or with samples having different characteristics. Originality/value – Altmetrics being an emerging field, little research has yet been conducted in this area. Very few studies have examined the reach of academic social networks like RG and their validity as sources of altmetric indicators such as RGScore, Impact Points, etc. The findings offer insights into the question of whether RG can be used as an alternative to traditional sources of bibliometric indicators, especially with reference to a rapidly developing country such as India.
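The core calculation is a plain Pearson correlation between paired metric values; the sketch below shows it for one RG/Scopus metric pair with placeholder values.

```python
# Pearson correlation between one RG metric and one Scopus metric (placeholder data).
import numpy as np
from scipy import stats

rg_citations = np.array([12, 40, 75, 150, 320, 510])       # assumed RG citation counts
scopus_citations = np.array([10, 35, 70, 160, 300, 540])   # assumed Scopus citation counts

r, p = stats.pearsonr(rg_citations, scopus_citations)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```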


2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
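As a hedged illustration of why n = 58 is limiting, the snippet below computes the approximate power of an independent-groups comparison with 29 subjects per group to detect a medium standardized difference (d = 0.5, our assumption for illustration).

```python
# Approximate power of a two-group comparison with 29 subjects per group (d = 0.5 assumed).
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=29, alpha=0.05, ratio=1.0)
print(f"power ~ {power:.2f}")   # well below the conventional 0.80 target
```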

