Performance of Multiple-Batch Approaches to Pharmacokinetic Bioequivalence Testing for Orally Inhaled Drug Products with Batch-to-Batch Variability

2021 ◽  
Vol 22 (7) ◽  
Author(s):  
Elise Burmeister Getz ◽  
Kevin J. Carroll ◽  
J. David Christopher ◽  
Beth Morgan ◽  
Scott Haughie ◽  
...  

Abstract
Batch-to-batch pharmacokinetic (PK) variability of orally inhaled drug products has been documented and can render single-batch PK bioequivalence (BE) studies unreliable; results from one batch may not be consistent with a repeated study using a different batch, yet the goal of PK BE is to deliver a product comparison that is interpretable beyond the specific batches used in the study. We characterized four multiple-batch PK BE approaches to improve outcome reliability without increasing the number of clinical study participants. Three approaches include multiple batches directly in the PK BE study with batch identity either excluded from the statistical model (“Superbatch”) or included as a fixed or random effect (“Fixed Batch Effect,” “Random Batch Effect”). A fourth approach uses a bio-predictive in vitro test to screen candidate batches, bringing the median batch of each product into the PK BE study (“Targeted Batch”). Three of these approaches (Fixed Batch Effect, Superbatch, Targeted Batch) continue the single-batch PK BE convention in which uncertainty in the Test/Reference ratio estimate due to batch sampling is omitted from the Test/Reference confidence interval. All three of these approaches provided higher power to correctly identify true bioequivalence than the standard single-batch approach with no increase in clinical burden. False equivalence (type I) error was inflated above the expected 5% level, but multiple batches controlled type I error better than a single batch. The Random Batch Effect approach restored 5% type I error, but had low power for small (e.g., <8) batch sample sizes using standard [0.8000, 1.2500] bioequivalence limits.
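To make the standard acceptance criterion concrete, here is a minimal Python sketch (not the authors' code) of a paired, log-scale bioequivalence check against the [0.8000, 1.2500] limits; the simulated data, sample size, and variability values are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical within-subject log-scale PK data (e.g., log AUC) from a
# two-period crossover: one value per subject per treatment. All values assumed.
n = 40
true_ratio = 1.05                      # assumed true Test/Reference geometric mean ratio
within_sd = 0.25                       # assumed within-subject SD on the log scale
log_subject = rng.normal(0.0, 0.4, n)  # subject-level reference log exposure
log_test = log_subject + np.log(true_ratio) + rng.normal(0.0, within_sd, n)
log_ref = log_subject + rng.normal(0.0, within_sd, n)

# Paired analysis on the log scale: mean difference and its 90% CI,
# then back-transform to the Test/Reference ratio scale.
diff = log_test - log_ref
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)   # two one-sided tests at alpha = 0.05
ci_low, ci_high = np.exp(mean_diff - t_crit * se), np.exp(mean_diff + t_crit * se)

print(f"GMR = {np.exp(mean_diff):.4f}, 90% CI = [{ci_low:.4f}, {ci_high:.4f}]")
print("Bioequivalent at standard limits:", 0.80 <= ci_low and ci_high <= 1.25)
```

The multiple-batch approaches in the abstract differ in how (or whether) batch enters this model, not in the confidence-interval criterion itself.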

Author(s):  
Shengjie Liu ◽  
Jun Gao ◽  
Yuling Zheng ◽  
Lei Huang ◽  
Fangrong Yan

Abstract
Bioequivalence (BE) studies are an integral component of the new drug development process and play an important role in the approval and marketing of generic drug products. However, existing design and evaluation methods operate largely within the framework of frequentist theory, and few implement Bayesian ideas. Based on a bioequivalence predictive probability model and a sample re-estimation strategy, we propose a new Bayesian two-stage adaptive design and explore its application in bioequivalence testing. The new design differs from existing two-stage designs (such as Potvin’s methods B and C) in the following aspects. First, it not only incorporates historical information and expert information, but also combines experimental data flexibly to aid decision-making. Second, its sample re-estimation strategy is based on the ratio of the information available at the interim analysis to the total information, which is simpler to calculate than Potvin’s method. Simulation results showed that the two-stage design can be combined with various stopping boundary functions, and the results differ across them. Moreover, the proposed method requires a smaller sample size than Potvin’s method while keeping the type I error rate below 0.05 and statistical power at or above 80%.
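As a rough illustration of the interim predictive-probability idea (not the authors' specific model or decision rules), the sketch below estimates, by Monte Carlo, the probability that a completed two-stage study would conclude bioequivalence given stage-1 data; the prior, variance handling, sample sizes, and limits are all assumed for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stage-1 within-subject log-ratio data (Test - Reference); values are illustrative.
stage1 = rng.normal(0.05, 0.25, 20)
n1, n2 = stage1.size, 20                         # planned stage-2 sample size (assumed)
limits = (np.log(0.80), np.log(1.25))            # standard BE limits on the log scale

# Posterior for the mean log-ratio under a flat prior with the stage-1 SD treated
# as known -- a deliberate simplification for this sketch.
sigma = stage1.std(ddof=1)
post_mean, post_sd = stage1.mean(), sigma / np.sqrt(n1)

n_sim, success = 5000, 0
for _ in range(n_sim):
    mu = rng.normal(post_mean, post_sd)          # draw the mean from its posterior
    stage2 = rng.normal(mu, sigma, n2)           # predictive stage-2 data
    combined = np.concatenate([stage1, stage2])
    m, se = combined.mean(), combined.std(ddof=1) / np.sqrt(n1 + n2)
    t_crit = stats.t.ppf(0.95, df=n1 + n2 - 1)
    lo, hi = m - t_crit * se, m + t_crit * se
    success += (limits[0] <= lo) and (hi <= limits[1])

print(f"Predictive probability of concluding BE: {success / n_sim:.3f}")
```

A design of this kind compares the predictive probability against pre-specified stop/continue boundaries at the interim look.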


2020 ◽  
Author(s):  
Brandon LeBeau

The linear mixed model is commonly used for longitudinal or nested data because of its ability to account for the dependency in such data. Researchers typically rely on the random effects to account for the dependency due to correlated data; however, serial correlation can also be used. If the random effect structure is misspecified (perhaps due to convergence problems), can the addition of serial correlation overcome this misspecification and allow for unbiased estimation and accurate inferences? This study explored this question with a simulation. Simulation results show that the fixed effects are unbiased; however, the empirical type I error rate is inflated when a random effect is missing from the model. Implications for applied researchers are discussed.
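A minimal sketch of the kind of simulation described, assuming a random-intercept-and-slope generating process fitted with a random-intercept-only model via statsmodels; the generating values and sample sizes are illustrative, and the serial-correlation alternative is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulate longitudinal data with a random intercept AND a random slope, then
# fit a model that omits the random slope (a deliberate misspecification).
n_subj, n_obs = 100, 6
rows = []
for i in range(n_subj):
    b0 = rng.normal(0, 1.0)            # random intercept
    b1 = rng.normal(0, 0.5)            # random slope (omitted from the first fit)
    for t in range(n_obs):
        y = 2.0 + 0.0 * t + b0 + b1 * t + rng.normal(0, 1.0)   # true fixed slope = 0
        rows.append({"id": i, "time": t, "y": y})
df = pd.DataFrame(rows)

# Random-intercept-only fit: the fixed slope stays (approximately) unbiased,
# but its standard error tends to be too small, inflating the type I error rate.
fit_misspec = smf.mixedlm("y ~ time", df, groups=df["id"]).fit()
# Correctly specified fit including the random slope, for comparison.
fit_full = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time").fit()

print("slope (misspecified):", round(fit_misspec.params["time"], 3),
      " SE:", round(fit_misspec.bse["time"], 3))
print("slope (correct)     :", round(fit_full.params["time"], 3),
      " SE:", round(fit_full.bse["time"], 3))
```

Repeating this over many simulated datasets and counting how often the slope's p-value falls below 0.05 gives the empirical type I error rate the abstract refers to.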


Pharmaceutics ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1109
Author(s):  
Elham Amini ◽  
Abhinav Kurumaddali ◽  
Sharvari Bhagwat ◽  
Simon M. Berger ◽  
Günther Hochhaus

The aim of this study was to further evaluate and optimize the Transwell® system for assessing the dissolution behavior of orally inhaled drug products (OIDPs), using fluticasone propionate as a model drug. Sample preparation involved the collection of a relevant inhalable dose fraction through an anatomical mouth/throat model, resulting in a more uniform presentation of drug particles during the subsequent dissolution test. The method differed from previously published procedures by (1) using a 0.4 µm polycarbonate (PC) membrane, (2) stirring the receptor compartment, and (3) placing the drug-containing side of the filter paper face down, towards the PC membrane. A model developed in silico, paired with the results of in vitro studies, suggested that a dissolution medium providing a solubility of about 5 µg/mL would be a good starting point for the method’s development, resulting in mean transfer times that were about 10 times longer than those of a solution. Furthermore, the model suggested that larger donor/receptor and sampling volumes (3, 3.3 and 2 mL, respectively) would significantly reduce the so-called “mass effect”. The outcomes of this study shed further light on the impact of experimental conditions on the complex interplay of dissolution and diffusion within a volume-limited system, under non-sink conditions.
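The interplay of non-sink dissolution and membrane diffusion can be illustrated with a simple two-compartment ODE sketch; this is not the authors' in silico model, and all parameter values (dose, volumes, solubility, rate constants) are assumptions chosen only to mirror the quantities discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative non-sink dissolution/permeation model: undissolved drug in the
# donor dissolves up to a solubility limit, and dissolved drug crosses the
# membrane into the receptor. Parameter values are assumptions, not the study's.
dose_ug = 10.0                          # deposited drug mass (ug)
V_donor, V_receptor = 3.0, 3.3          # compartment volumes (mL)
Cs = 5.0                                # solubility in the dissolution medium (ug/mL)
k_diss = 0.5                            # first-order dissolution rate constant (1/min)
PA = 0.05                               # membrane permeability x area (mL/min)

def rhs(t, y):
    solid, c_d, c_r = y                 # solid mass, donor conc., receptor conc.
    diss = k_diss * solid * max(0.0, 1.0 - c_d / Cs)   # slows near saturation
    flux = PA * (c_d - c_r)                            # passive transfer across membrane
    return [-diss, (diss - flux) / V_donor, flux / V_receptor]

sol = solve_ivp(rhs, (0, 600), [dose_ug, 0.0, 0.0], dense_output=True, max_step=1.0)

# Mean transfer time approximated as the integral of (1 - fraction transferred)
# over the observation window (trapezoidal rule).
t = np.linspace(0, 600, 2001)
amount_r = sol.sol(t)[2] * V_receptor
frac = amount_r / amount_r[-1]
mtt = np.sum(0.5 * ((1 - frac[1:]) + (1 - frac[:-1])) * np.diff(t))
print(f"Approximate mean transfer time: {mtt:.0f} min "
      f"(fraction of dose in receptor at 600 min: {amount_r[-1] / dose_ug:.2f})")
```

Varying the assumed solubility and compartment volumes in such a sketch shows qualitatively how non-sink conditions stretch transfer times relative to a pre-dissolved solution.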


2018 ◽  
Author(s):  
Van Rynald T Liceralde ◽  
Peter C. Gordon

Power transforms have been increasingly used in linear mixed-effects models (LMMs) of chronometric data (e.g., response times [RTs]) as a statistical solution to preempt violating the assumption of residual normality. However, differences in results between LMMs fit to raw RTs and transformed RTs have reignited discussions on issues concerning the transformation of RTs. Here, we analyzed three word-recognition megastudies and performed Monte Carlo simulations to better understand the consequences of transforming RTs in LMMs. Within each megastudy, transforming RTs produced different fixed- and random-effect patterns; across the megastudies, RTs were optimally normalized by different power transforms, and results were more consistent among LMMs fit to raw RTs. Moreover, the simulations showed that LMMs fit to optimally normalized RTs had greater power for main effects in smaller samples, but that LMMs fit to raw RTs had greater power for interaction effects as sample sizes increased, with negligible differences in Type I error rates between the two models. Based on these results, LMMs should be fit to raw RTs when there is no compelling reason beyond nonnormality to transform RTs and when the interpretive framework mapping the predictors and RTs treats RT as an interval scale.
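For readers unfamiliar with the workflow, the sketch below illustrates the general approach of normalizing RTs with a Box-Cox power transform and comparing fixed-effect estimates from LMMs fit to raw versus transformed RTs; the simulated data, predictor, and model are illustrative assumptions, not the megastudy analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(3)

# Simulate skewed, subject-nested response times (ms); all values are illustrative.
n_subj, n_trials = 50, 80
rows = []
for s in range(n_subj):
    subj_shift = rng.normal(0, 50)
    freq = rng.normal(0, 1, n_trials)          # e.g., word frequency (z-scored)
    rt = 500 + subj_shift - 20 * freq + rng.gamma(shape=2, scale=60, size=n_trials)
    rows.append(pd.DataFrame({"subj": s, "freq": freq, "rt": rt}))
df = pd.concat(rows, ignore_index=True)

# Box-Cox picks the power that best normalizes the RT distribution.
df["rt_bc"], lam = stats.boxcox(df["rt"])
print(f"Optimal Box-Cox lambda: {lam:.2f}")

# Fit the same random-intercept LMM to raw and transformed RTs; note that the
# coefficients are on different scales and are not directly comparable.
for dv in ("rt", "rt_bc"):
    fit = smf.mixedlm(f"{dv} ~ freq", df, groups=df["subj"]).fit()
    print(dv, "freq coef:", round(fit.params["freq"], 4),
          "p =", round(fit.pvalues["freq"], 4))
```

Repeating this over many simulated samples of varying size is the kind of Monte Carlo comparison of power and type I error described in the abstract.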


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e2885 ◽  
Author(s):  
Cajo J.F. ter Braak ◽  
Pedro Peres-Neto ◽  
Stéphane Dray

Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites, and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community-weighted trait means (CWM) method, the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative, with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of the fourth-corner method. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set at 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an “omitted variable bias” problem, as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
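The pmax logic can be sketched with a simple community-weighted-mean statistic: run one permutation test over sites and one over species, then take the larger p-value. This is an illustrative simplification, not the fourth-corner or GLM implementations evaluated in the paper; the data and statistic are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: abundance matrix (sites x species), site environment, species trait.
n_sites, n_species = 30, 40
L = rng.poisson(2.0, size=(n_sites, n_species)).astype(float)
env = rng.normal(size=n_sites)
trait = rng.normal(size=n_species)

def cwm_corr(L, env, trait):
    """Correlation between the environment and the community-weighted mean trait."""
    cwm = (L * trait).sum(axis=1) / L.sum(axis=1)
    return np.corrcoef(env, cwm)[0, 1]

obs = cwm_corr(L, env, trait)

def perm_p(stat_fn, n_perm=999):
    """Two-sided permutation p-value against the observed statistic."""
    perms = np.array([stat_fn() for _ in range(n_perm)])
    return (np.sum(np.abs(perms) >= abs(obs)) + 1) / (n_perm + 1)

# Site-based test: permute the environment values across sites.
p_site = perm_p(lambda: cwm_corr(L, rng.permutation(env), trait))
# Species-based test: permute the trait values across species.
p_species = perm_p(lambda: cwm_corr(L, env, rng.permutation(trait)))

# The pmax test rejects only if BOTH component tests reject.
p_max = max(p_site, p_species)
print(f"p_site = {p_site:.3f}, p_species = {p_species:.3f}, p_max = {p_max:.3f}")
```

Taking the maximum of the two p-values is what protects the test against the random species-level variation that inflates the error of purely site-based resampling.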


2020 ◽  
Author(s):  
Sonika Singh ◽  
Christopher Tench ◽  
Radu Tanasescu ◽  
Cris Constantinescu

Abstract
The purpose of this coordinate-based meta-analysis (CBMA) was to summarise the available evidence on regional grey matter (GM) changes in patients with multiple sclerosis (MS) and clinically isolated syndrome (CIS). CBMA is a way to find consistent results across multiple independent studies that are otherwise not easily comparable due to methodological differences. The coordinate-based random effect size (CBRES) meta-analysis method utilizes the reported coordinates (foci of the clusters of GM loss) and Z scores standardised by the number of subjects, controlling the type I error rate by the false cluster discovery rate (FCDR). Thirty-four published articles reporting forty-five independent studies using voxel-based morphometry (VBM) for the assessment of GM atrophy between MS or CIS patients and healthy controls were identified from electronic databases. The primary meta-analysis identified clusters of spatially consistent cross-study reporting of GM atrophy; subgroup analyses and meta-regression were also performed. This meta-analysis demonstrates consistent areas of GM loss in MS or CIS, in the form of significant clusters. Some clusters also demonstrate correlation with disease duration.


2020 ◽  
Vol 22 (2) ◽  
Author(s):  
Robert Price ◽  
Jagdeep Shur ◽  
William Ganley ◽  
Gonçalo Farias ◽  
Nikoletta Fotaki ◽  
...  

2012 ◽  
Vol 55 (5) ◽  
pp. 506-518
Author(s):  
M. Mendeş

Abstract. This study was conducted to compare the type I error and test power of the ANOVA, REML and ML methods by Monte Carlo simulation under different experimental conditions. Simulation results indicated that the variance ratios, sample size and number of groups were important factors in determining the appropriate method for estimating variance components. The ML method was found to be slightly superior to the ANOVA and REML methods, while ANOVA and REML generally produced similar results. As a result, regardless of distribution shape and number of groups, when n < 15 the ML and REML methods might be preferred to ANOVA. However, when either the number of groups or the sample size is increased (n ≥ 15), the ANOVA method may also be used along with ML and REML.
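A minimal sketch of such a comparison for a balanced one-way random-effects layout, contrasting the ANOVA (method-of-moments) estimator of the between-group variance with REML and ML fits from statsmodels; the group counts and variance components are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# One-way random-effects model y_ij = mu + a_i + e_ij, with assumed variances.
k, n = 10, 8                        # number of groups, observations per group
sigma_a2, sigma_e2 = 1.0, 4.0       # between- and within-group variances (assumed)
a = rng.normal(0, np.sqrt(sigma_a2), k)
y = (a[:, None] + rng.normal(0, np.sqrt(sigma_e2), (k, n))).ravel()
group = np.repeat(np.arange(k), n)
df = pd.DataFrame({"y": y, "group": group})

# ANOVA (method-of-moments) estimator of the between-group variance component.
group_means = df.groupby("group")["y"].mean().to_numpy()
msb = n * np.sum((group_means - y.mean()) ** 2) / (k - 1)
msw = np.sum((y - np.repeat(group_means, n)) ** 2) / (k * (n - 1))
anova_sigma_a2 = (msb - msw) / n    # can be negative in small samples

# REML and ML estimates from a random-intercept mixed model.
reml = smf.mixedlm("y ~ 1", df, groups=df["group"]).fit(reml=True)
ml = smf.mixedlm("y ~ 1", df, groups=df["group"]).fit(reml=False)

print(f"ANOVA: {anova_sigma_a2:.3f}, REML: {reml.cov_re.iloc[0, 0]:.3f}, "
      f"ML: {ml.cov_re.iloc[0, 0]:.3f}")
```

Wrapping this in a loop over many simulated datasets and varying k, n, the variance ratio and the error distribution is the kind of Monte Carlo comparison the abstract describes.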


2021 ◽  
Author(s):  
Zilu Liu ◽  
Asuman Turkmen ◽  
Shili Lin

In genetic association studies with common diseases, population stratification is a major source of confounding. Principal component regression (PCR) and the linear mixed model (LMM) are two commonly used approaches to account for population stratification. Previous studies have shown that LMM can be interpreted as including all principal components (PCs) as random-effect covariates. However, including all PCs in LMM may inflate type I error in some scenarios due to redundancy, while including only a few pre-selected PCs in PCR may fail to fully capture the genetic diversity. Here, we propose a statistical method under the Bayesian framework, Bayestrat, that utilizes appropriate shrinkage priors to shrink the effects of non- or minimally confounded PCs and improve the identification of highly confounded ones. Simulation results show that Bayestrat consistently achieves lower type I error rates yet higher power, especially when the number of PCs included in the model is large. We also apply our method to two real datasets, the Dallas Heart Studies (DHS) and the Multi-Ethnic Study of Atherosclerosis (MESA), and demonstrate the superiority of Bayestrat over commonly used methods.
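As context, the baseline PC-adjustment strategy that Bayestrat builds on can be sketched as follows: compute the top PCs of the background genotype matrix and include them as fixed covariates in a logistic association test. This sketch does not implement Bayestrat's shrinkage priors; the simulated populations, allele frequencies, and the choice of five PCs are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Simulated genotypes for two subpopulations with different allele frequencies,
# plus one test SNP; all frequencies and effect sizes are illustrative.
n, m = 600, 200                                   # subjects, background SNPs
pop = rng.integers(0, 2, n)                       # population label (the confounder)
freqs = np.where(pop[:, None] == 0, 0.2, 0.4)
G = rng.binomial(2, np.broadcast_to(freqs, (n, m)))
snp = rng.binomial(2, np.where(pop == 0, 0.1, 0.5))   # stratified test SNP
logit = -1.0 + 0.8 * pop                          # disease risk depends on population only
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Top principal components of the centered background genotype matrix.
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :5] * S[:5]                            # first 5 PCs (number is an assumption)

# Logistic regression of disease on the test SNP, with and without PC adjustment.
for label, X in (("unadjusted", snp[:, None]),
                 ("PC-adjusted", np.column_stack([snp, pcs]))):
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(f"{label:12s} SNP p-value: {fit.pvalues[1]:.4f}")
```

Bayestrat's contribution, per the abstract, is to keep many PCs in the model but shrink the coefficients of the uninformative ones rather than pre-selecting a fixed small set.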


Author(s):  
Dr. Vinod Gaikwad ◽  
Prajakta Patil ◽  
Atmaram Pawar ◽  
Kakasaheb Mahadik

Bioequivalence (BE) is established between the brand drug and the generic drug to allow the generic to be linked to the preclinical and clinical testing conducted on the reference listed drug. Regulatory agencies around the globe have issued guidance on bioequivalence approaches for locally acting orally inhaled drug products (OIDPs). The prime intent of the present article is to compare the approaches of different international regulatory authorities, such as Health Canada, the European Medicines Agency and the US Food and Drug Administration, that have published guidance related to locally acting OIDPs. In addition, the Central Drugs Standard Control Organisation, India, has published guidelines for bioavailability and bioequivalence studies. BE recommendations from global regulatory agencies were compared across several parameters, namely the inhaler device, formulation, selection of the reference product, and in-vitro as well as in-vivo studies (pharmacokinetic, pharmacodynamic, and clinical studies). For the in-vivo studies, details about study design, dose selection, subject inclusion/exclusion criteria, study period, study endpoints, and equivalence acceptance criteria are discussed in the present review article.

