Methodological approaches for analysing data from therapeutic efficacy studies

2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Solange Whegang Youdom ◽  
Leonardo K. Basco

Abstract Several anti-malarial drugs have been evaluated in randomized clinical trials to treat acute uncomplicated Plasmodium falciparum malaria. The outcome of anti-malarial drug efficacy studies is classified into one of four possible outcomes defined by the World Health Organization: adequate clinical and parasitological response, late parasitological failure, late clinical failure, and early treatment failure. These four ordered categories are ordinal data, which are reduced either to a binary outcome (i.e., treatment success and treatment failure) to calculate the proportion of treatment failures, or to a time-to-event outcome for Kaplan–Meier survival analysis. The arbitrary transition from 4-level ordered categories to 2-level categories results in a loss of statistical power. In the opinion of the authors, this outcome can be considered ordinal at a fixed endpoint or at longitudinal endpoints. Alternative statistical methods can be applied to the 4-level ordinal categories of therapeutic response to optimize data exploitation. Furthermore, network meta-analysis is useful not only for direct comparison of drugs that were evaluated together in a randomized design, but also for indirect comparison of different artemisinin-based combinations across different clinical studies using a common drug comparator, with the aim of determining the ranking order of drug efficacy. Previous studies conducted in Cameroonian children served as the data source to illustrate the feasibility of these novel statistical approaches. Data analysis based on ordinal endpoints may be helpful to gain further insight into anti-malarial drug efficacy.
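A minimal sketch (with hypothetical trial-arm counts, not data from the study) of the collapse the abstract describes: the four ordered WHO categories are reduced to a binary success/failure outcome, which discards the graded severity among the three failure types.

```python
# Four ordered WHO categories, best to worst (counts are hypothetical)
who_categories = {
    "ACPR": 80,  # adequate clinical and parasitological response
    "LPF": 8,    # late parasitological failure
    "LCF": 7,    # late clinical failure
    "ETF": 5,    # early treatment failure
}

n = sum(who_categories.values())

# Binary collapse: everything except ACPR becomes "treatment failure"
failures = n - who_categories["ACPR"]
failure_proportion = failures / n
print(f"Binary failure proportion: {failure_proportion:.3f}")  # 20/100 = 0.200

# The ordinal view keeps the ordering the binary collapse loses:
# cumulative proportions over the ordered categories
cumulative = []
total = 0
for cat, count in who_categories.items():
    total += count
    cumulative.append((cat, total / n))
print(cumulative)
```

The cumulative proportions are the quantities that ordinal models (e.g., proportional-odds regression) operate on; the binary collapse keeps only the first of them.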

PEDIATRICS ◽  
1994 ◽  
Vol 93 (1) ◽  
pp. 17-27
Author(s):  
Kenneth H. Brown ◽  
Janet M. Peerson ◽  
Olivier Fontaine

Objective. To assess the effects of continued feeding of nonhuman milks or formulas to young children during acute diarrhea on their treatment failure rates, stool frequency and amount, diarrheal duration, and change in body weight. Methods. A total of 29 randomized clinical trials of 2215 patients were identified by computerized bibliographic search and review of published articles. Data were abstracted and analyzed using standard meta-analytic procedures. Results. Among studies that compared lactose-containing milk or formula diets with lactose-free regimens, those children who received the lactose-containing diets during acute diarrhea were twice as likely to have a treatment failure as those who received a lactose-free diet (22% vs 12%, respectively; P < .001). However, the excess treatment failure rates occurred only in those studies that included patients whose initial degree of dehydration, as reported by authors, was severe, or that were conducted before 1985, when appropriate diarrhea treatment protocols were first widely accepted. Among studies of patients with mild diarrhea, all but one of which were completed after 1985, the overall treatment failure rates in the lactose groups were similar to the rates in the lactose-free groups (13% vs 15%). These results suggest that children with mild or no dehydration and those who are managed according to appropriate treatment protocols, such as that promoted by the World Health Organization, can be treated as successfully with lactose-containing diets as with lactose-free ones. 
The pooled information from studies that compared undiluted lactose-containing milks with the same milks offered at reduced concentration concluded that (1) children who received undiluted milks were marginally more likely to experience treatment failure than those who received diluted milk (16% vs 12%, P = .05), (2) the differences in stool output were small and of limited clinical importance, and (3) children who received the undiluted milk diets gained 0.25 SD more weight than those who received the diluted ones (P = .004). In addition, as with the previous set of studies, there were no differences in the pooled treatment failure rates between the respective groups in those studies of mildly dehydrated patients conducted after 1985 (14% vs 12%). Conclusions. The vast majority of young children with acute diarrhea can be successfully managed with continued feeding of undiluted nonhuman milks. Routine dilution of milk and routine use of lactose-free milk formula are therefore not necessary, especially when oral rehydration therapy and early feeding (in addition to milk) form the basic approach to the clinical management of diarrhea in infants and children.
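A rough sketch of the kind of pooling behind aggregate failure rates such as the 22% vs. 12% reported above. The per-study counts here are hypothetical; a real meta-analysis would weight studies and model between-study heterogeneity rather than simply summing counts.

```python
# Hypothetical per-study 2x2 counts:
# (failures_lactose, n_lactose, failures_lactose_free, n_lactose_free)
studies = [
    (12, 50, 6, 48),
    (9, 40, 5, 42),
    (15, 60, 8, 61),
]

# Crude pooled failure rates by summing events and denominators
f_lac = sum(s[0] for s in studies)
n_lac = sum(s[1] for s in studies)
f_free = sum(s[2] for s in studies)
n_free = sum(s[3] for s in studies)

rate_lac = f_lac / n_lac
rate_free = f_free / n_free
print(f"Pooled failure rate, lactose-containing: {rate_lac:.1%}")
print(f"Pooled failure rate, lactose-free:       {rate_free:.1%}")
```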


2003 ◽  
Vol 47 (1) ◽  
pp. 231-237 ◽  
Author(s):  
Agnès Aubouy ◽  
Mohamed Bakary ◽  
Annick Keundjian ◽  
Bernard Mbomat ◽  
Jean Ruffin Makita ◽  
...  

ABSTRACT Many African countries currently use a sulfadoxine-pyrimethamine combination (SP) or amodiaquine (AQ) to treat uncomplicated Plasmodium falciparum malaria. Both drugs represent the last inexpensive alternatives to chloroquine. However, resistant P. falciparum populations are widely reported in Africa, and it is essential to know the current situation of resistance. The in vivo World Health Organization standard 28-day test was used to assess the efficacy of AQ and SP in treating uncomplicated falciparum malaria in Gabonese children under 10 years of age. To document treatment failures, molecular genotyping to distinguish therapeutic failures from reinfections and drug dosages were undertaken. A total of 118 and 114 children were given AQ or SP, respectively, and were monitored. SP was more effective than AQ, with therapeutic failure rates of 14.0 and 34.7%, respectively. Three days after initiation of treatment, the mean level of monodesethylamodiaquine (MdAQ) in plasma was 149 ng/ml in children treated with amodiaquine. In those treated with SP, mean levels of sulfadoxine and pyrimethamine in plasma were 100 μg/ml and 212 ng/ml, respectively. Levels of the three drugs were higher in patients successfully treated with AQ (MdAQ plasma levels) or SP (sulfadoxine and pyrimethamine plasma levels). Blood concentrations higher than the breakpoints of 135 ng/ml for MdAQ, 100 μg/ml for sulfadoxine, and 175 ng/ml for pyrimethamine were associated with treatment success (odds ratios: 4.5, 9.8, and 11.8, respectively; all P values were <0.009). Genotyping of merozoite surface proteins 1 and 2 demonstrated a mean of 4.0 genotypes per person before treatment. At reappearance of parasitemia, both recrudescent parasites (represented by common bands in both samples) and newly inoculated parasites (represented by bands that were absent before treatment) were present in the blood of most (51.1%) children.
Only 3 (6.4%) therapeutic failures were the result not of treatment inefficacy but of new infection. In areas where levels of drug resistance and complexity of infections are high, drug dosage and parasite genotyping may be of limited value in improving the precision of drug efficacy measurement. Their use should be weighed against logistical constraints.
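A sketch of how a plasma-concentration breakpoint is linked to treatment success via an odds ratio, as done above for the MdAQ, sulfadoxine, and pyrimethamine thresholds. The 2x2 counts are hypothetical; the study reports only the resulting odds ratios.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:

                  success  failure
    >= breakpoint    a        b
    <  breakpoint    c        d
    """
    return (a * d) / (b * c)

# Hypothetical counts: patients above/below a plasma-concentration breakpoint
or_above = odds_ratio(45, 5, 30, 20)
print(f"Odds ratio for concentration >= breakpoint: {or_above:.1f}")  # 6.0
```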


2017 ◽  
Vol 1 (3) ◽  
pp. 198-204 ◽  
Author(s):  
S. N. Ghaemi ◽  
Harry P. Selker

Introduction Although classical randomized clinical trials (RCTs) are the gold standard for proof of drug efficacy, randomized discontinuation trials (RDTs), sometimes called “enriched” trials, are used increasingly, especially in psychiatric maintenance studies. Methods A narrative review of two decades of experience with RDTs. Results RDTs in psychiatric maintenance trials tend to use a dependent variable as a predictor: treatment response. Treatment responders are assessed for treatment response. This tautology in the logic of RDTs renders them invalid, since the predictor and the outcome are the same variable. Although RDTs can be designed to avoid this tautologous state of affairs, such as by using independent predictors of outcomes, this is not the case in psychiatric maintenance studies. Further, purported benefits of RDTs regarding feasibility were found to be questionable. Specifically, RDTs do not enhance statistical power in many settings and, because of high dropout rates, produce results of questionable validity. Any claimed benefits come with notably reduced generalizability. Conclusions RDTs appear to be scientifically invalid as used in psychiatric maintenance designs. Their purported feasibility benefits are not seen in actual trials of psychotropic drugs. There is warrant for changes in federal policy regarding marketing indications for maintenance efficacy based on the RDT design.


2019 ◽  
Author(s):  
Yitagesu Habtu ◽  
Tesema Bereku ◽  
Girma Alemu ◽  
Ermias Abera

BACKGROUND Ethiopia is one of the thirty high-burden countries for multi-drug-resistant tuberculosis (MDR-TB) identified by the World Health Organization. Contextual evidence on the emergence of the disease at the program level is limited. OBJECTIVE The aim of the study is to explore patient-provider factors that may facilitate the emergence of multi-drug-resistant tuberculosis. METHODS We used a qualitative phenomenological study design from June to July 2015. We conducted ten in-depth interviews and four focus group discussions with purposively selected patients and providers. We designed and used an interview guide to collect data. Verbatim transcripts were exported to Open Code 3.4 for emergent thematic analysis. Domain summaries were used to support core interpretation. RESULTS The study explored patient-provider factors facilitating the emergence of multi-drug-resistant tuberculosis. These factors were grouped into underlying, health-system, and patient-related factors. Notably, the study yielded a conflicting finding on the relationship between a history of discontinuing drug-susceptible tuberculosis treatment and the emergence of multi-drug-resistant tuberculosis. CONCLUSIONS The patient-provider factors may result in poor early case identification, poor adherence, and reduced treatment success in drug-susceptible or multi-drug-resistant tuberculosis. Our study implies the need for awareness creation about multi-drug-resistant tuberculosis among patients and further familiarization for providers. This study also shows that some patients developed multi-drug-resistant tuberculosis even though they had never discontinued their drug-susceptible tuberculosis treatment. Therefore, further studies may be required to explain this discordant finding.


2014 ◽  
Vol 58 (10) ◽  
pp. 5643-5649 ◽  
Author(s):  
Katherine Kay ◽  
Eva Maria Hodel ◽  
Ian M. Hastings

ABSTRACT It is now World Health Organization (WHO) policy that drug concentrations on day 7 be measured as part of routine assessment in antimalarial drug efficacy trials. The rationale is that this single pharmacological measure serves as a simple and practical predictor of treatment outcome for antimalarial drugs with long half-lives. Herein we review theoretical data and field studies and conclude that the day 7 drug concentration (d7c) actually appears to be a poor predictor of therapeutic outcome. This poor predictive capability combined with the fact that many routine antimalarial trials will have few or no failures means that there appears to be little justification for this WHO recommendation. Pharmacological studies have a huge potential to improve antimalarial dosing, and we propose study designs that use more-focused, sophisticated, and cost-effective ways of generating these data than the mass collection of single d7c concentrations.


2021 ◽  
Author(s):  
Rohan Sakhardande ◽  
Deepak Devegowda

Abstract The analysis of parent-child well performance is a complex problem depending on the interplay between timing, completion design, formation properties, direct frac-hits, and well spacing. Assessing the impact of well spacing on parent or child well performance is therefore challenging. A naïve, purely observational approach does not control for completion design or formation properties; it can compromise well spacing decisions and economics and perhaps lead to non-intuitive results. By using concepts from causal inference in randomized clinical trials, we quantify the impact of well spacing decisions on parent and child well performance. The fundamental concept behind causal inference is that causality facilitates prediction, but the ability to predict does not imply causality, since prediction can arise from mere association between variables. In this study, we work with a large dataset of over 3000 wells in a large oil-bearing province in Texas. The dataset includes several covariates such as completion design (proppant/fluid volumes, frac stages, lateral length, cluster spacing, clusters/stage, and others) and formation properties (mechanical and petrophysical properties), as well as downhole location. We evaluate the impact of well spacing on 6-month and 1-year cumulative oil in four groups associated with different ranges of parent-child spacing. By assessing the statistical balance between the covariates for both parent and child well groups (controlling for completion and formation properties), we estimate the causal impact of well spacing on parent and child well performance. We compare our analysis with the routine naïve approach that gives non-intuitive results. In each of the four groups associated with different ranges of parent-child well spacing, the causal workflow quantifies the production loss associated with the parent and child well.
This degradation in performance is seen to decrease with increasing well spacing, and we provide an optimal well spacing value for this specific multi-bench unconventional play that has been validated in the field. The naïve analysis, based on simply assessing association or correlation, on the contrary shows increasing child well degradation with increasing well spacing, which is not supported by the data. The routinely applied correlative analysis between the outcome (cumulative oil) and predictors (well spacing) fails simply because it controls for neither the variations in completion design over the years nor the variations in formation properties. To our knowledge, no other paper in the petroleum engineering literature discusses causal inference. This is a fundamental precept in medicine, where drug efficacy is assessed by controlling for age, sex, habits, and other covariates. The same workflow can easily be generalized to assess well spacing decisions and parent-child well performance across multi-generational completion designs and spatially variant formation properties.
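A common way to assess the "statistical balance between the covariates" that the abstract mentions is the standardized mean difference (SMD). The sketch below is illustrative only: the covariate name and values are hypothetical, not from the paper's dataset.

```python
import math
import random

random.seed(0)

# Hypothetical covariate (e.g., proppant volume index) for two well groups
parent = [random.gauss(100, 15) for _ in range(200)]
child = [random.gauss(102, 15) for _ in range(200)]

def smd(x, y):
    """Standardized mean difference between two samples:
    (mean(x) - mean(y)) / pooled standard deviation."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    pooled_sd = math.sqrt((vx + vy) / 2)
    return (mx - my) / pooled_sd

balance = smd(parent, child)
# A common rule of thumb: |SMD| < 0.1 suggests adequate covariate balance
print(f"SMD = {balance:.3f}, balanced = {abs(balance) < 0.1}")
```

In a causal workflow, covariates with large |SMD| between groups would be adjusted for (e.g., by matching or weighting) before attributing any outcome difference to well spacing.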


2021 ◽  
Author(s):  
Kuo-Ti Peng ◽  
Tsung-Yu Huang ◽  
Jiun-Liang Chen ◽  
Chiang-Wen Lee ◽  
Hsin-Nung Shih

Abstract Background: Total hip arthroplasty (THA) is a widely used and successful orthopedic procedure for treating severe hip osteoarthritis, rheumatoid arthritis, and avascular necrosis. However, periprosthetic joint infection (PJI) after THA is a devastating complication for patients and orthopedic surgeons. Although surgical techniques have advanced and antibiotic-loaded cement spacers and beads have been developed, the treatment failure rate of one- or two-stage exchange arthroplasty for PJI remains high, with a reported incidence of >10%. Therefore, determining the possible pathogenesis and increasing the treatment success rate is a clinically important and urgent issue. Methods: A total of 256 patients with PJI after THA treated from 2005 to 2015 were included in this retrospective review. Seven patients required a combined ilioinguinal and anterolateral approach for debridement of an iliac fossa abscess and the infected hip lesion, including five patients with intraoperative pus leakage from the acetabular inner wall and two patients who underwent preoperative pelvic computed tomography (CT) because of repeated PJI treatment failure. All available data from the medical records of all patients were retrospectively analyzed. Results: Of the 256 patients, seven (3.1%) had a combined iliac fossa abscess in our cohort. On microbiologic analysis, a total of thirteen pathogens were isolated from the seven recurrent PJI patients with iliac fossa abscess, and Staphylococcus aureus was the most common pathogen (4 of 7 cases). The serum white blood cell (WBC) count decreased significantly two weeks after debridement with the combined ilioinguinal and anterolateral approach compared with the day before surgery (11,840/μL vs. 7,370/μL; p<0.01), and the C-reactive protein (CRP) level had decreased by one week postoperatively (107 mg/dL vs. 47.31 mg/dL; p=0.03).
Furthermore, no recurrent infection was found in the six revision THA patients over a follow-up of 7.1 years. Conclusion: These results suggest that preoperative pelvic CT and cautious identification of uncertain pus leakage from the inner wall of the acetabulum are essential for the diagnosis of recurrent PJI. Radical debridement with a combined ilioinguinal and anterolateral approach may avoid treatment failure in recurrent PJI with iliac fossa abscess.


2019 ◽  
Vol 40 (26) ◽  
pp. 2155-2163 ◽  
Author(s):  
Filippos Triposkiadis ◽  
Javed Butler ◽  
Francois M Abboud ◽  
Paul W Armstrong ◽  
Stamatis Adamopoulos ◽  
...  

Abstract Randomized clinical trials initially used heart failure (HF) patients with low left ventricular ejection fraction (LVEF) to select study populations with high risk to enhance statistical power. However, this use of LVEF in clinical trials has led to oversimplification of the scientific view of a complex syndrome. Descriptive terms such as ‘HFrEF’ (HF with reduced LVEF), ‘HFpEF’ (HF with preserved LVEF), and more recently ‘HFmrEF’ (HF with mid-range LVEF), assigned on the basis of arbitrary LVEF cut-off points, have gradually come to be treated as separate diseases, implying distinct pathophysiologies. In this article, based on pathophysiological reasoning, we challenge the paradigm of classifying HF according to LVEF. Instead, we propose that HF is a heterogeneous syndrome in which disease progression is associated with a dynamic evolution of functional and structural changes leading to unique disease trajectories, creating a spectrum of phenotypes with overlapping and distinct characteristics. Moreover, we argue that by recognizing the spectral nature of the disease, a novel stratification will arise from new technologies and scientific insights that will shape the design of future trials based on deeper understanding beyond the LVEF construct alone.


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S519-S519
Author(s):  
Christopher D Pearson ◽  
Dorothy Holzum ◽  
Ryan P Moenster ◽  
Travis W Linneman

Abstract Background Erythrocyte sedimentation rate (ESR) is monitored during therapy for osteomyelitis (OM), but the degree of reduction associated with treatment success remains unclear. Methods This retrospective cohort study evaluated patients treated for at least 2 weeks with intravenous (IV) antibiotics for OM through the VA St. Louis HCS from 1 January 2010 to 1 January 2018 with at least 2 ESR values during their therapy. Patients were excluded if they had comorbidities that could cause elevations in ESR. The primary outcome was the rate of treatment failure in patients achieving ≥50% decrease in ESR from baseline compared with those without a 50% decrease. Treatment failure was defined as a need for unplanned surgical intervention or re-initiation of antibiotic therapy for OM of the same anatomical site within 6 months after initial therapy was discontinued. The presence of diabetes, peripheral vascular disease (PVD), age >70, baseline creatinine clearance (CrCl) < 50 mL/minute, surgical intervention as part of initial therapy, and ESR reduction ≥50% from baseline were included in a univariate analysis; variables with P < 0.2 were included in a multivariate logistic regression model. Results A total of 143 patients were included; 74 patients with a ≥50% decrease in ESR and 69 patients with a decrease <50%. Mean initial ESRs were not different between groups (79.5 ± 31 vs. 79.9 ± 32 mm/hour, P = 0.95), but end-of-treatment values were significantly higher in the <50% reduction group than in the ≥50% group (72.4 ± 42 vs. 20.6 ± 14 mm/hour, P < 0.05). There were no baseline differences between groups with regard to age, rates of diabetes, PVD, CrCl < 50 mL/minute, initial surgical therapy management, or definitive vs. empiric therapy. Thirty percent (22/74) of patients with a ≥50% reduction in ESR failed treatment vs. 55% (38/69) of patients with a <50% reduction (P < 0.01).
Only ESR reduction of ≥50% met criteria for inclusion in the multivariate regression model and was associated with a 65.5% relative risk reduction in treatment failure (OR 0.345; 95% CI 0.173–0.687; P = 0.002). Conclusion Achieving an ESR reduction of ≥50% from baseline during treatment for OM was independently associated with a significant reduction in risk of treatment failure. Disclosures All authors: No reported disclosures.
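The unadjusted odds ratio can be reproduced directly from the failure counts reported above (22/74 failures with ≥50% ESR reduction vs. 38/69 with <50% reduction); it comes out very close to the adjusted OR of 0.345 from the regression model.

```python
# Failure counts reported in the abstract
fail_hi, n_hi = 22, 74  # >=50% ESR reduction group
fail_lo, n_lo = 38, 69  # <50% ESR reduction group

# Odds of failure in each group, then their ratio
odds_hi = fail_hi / (n_hi - fail_hi)  # 22/52
odds_lo = fail_lo / (n_lo - fail_lo)  # 38/31
or_unadj = odds_hi / odds_lo
print(f"Unadjusted OR = {or_unadj:.3f}")  # ≈ 0.345
```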

