Effects of dose titration on adherence and treatment duration of pregabalin among patients with neuropathic pain: A MarketScan database study

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0242467
Author(s):  
Yu-Chen Yeh ◽  
Joseph C. Cappelleri ◽  
Xiaocong L. Marston ◽  
Ahmed Shelbaya

Objective To examine pregabalin dose titration and its impact on treatment adherence and duration in patients with neuropathic pain (NeP). Methods The MarketScan database (2009–2014) was used to extract a cohort of incident adult pregabalin users with NeP who had at least 12 months of follow-up data. Any dose augmentation within 45 days following the first pregabalin claim was defined as dose titration. Adherence (measured by medication possession ratio/MPR) and persistence (measured as the duration of continuous treatment) were compared between the cohorts with and without dose titration. Logistic regressions and Cox proportional hazards models were used to identify the factors associated with adherence (MPR ≥ 0.8) and predictors of time to discontinuation. Results Among the 5,186 patients in the analysis, only 18% had dose titration. Patients who had dose titration were approximately 2.6 times as likely to be adherent (MPR ≥ 0.8) as those who did not (odds ratio = 2.59, P < 0.001). Kaplan-Meier analysis showed that the time to discontinuation or switch was significantly longer among patients who had dose titration (4.99 vs. 4.04 months, P = 0.009). Conclusions Dose titration was associated with improved treatment adherence and persistence among NeP patients receiving pregabalin. These findings provide evidence to increase physician awareness of dose recommendations in the prescribing information and to educate patients on the importance of titration and adherence.
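The MPR-based adherence measure and the 45-day titration window above map directly onto claims-level logic. Below is a minimal, hypothetical sketch of that logic in Python with pandas; it is not the study's code, and the column names and example fills are assumptions, with only the 45-day window and the MPR ≥ 0.8 threshold taken from the abstract.

```python
# Hypothetical sketch: derive MPR and a 45-day dose-titration flag from claims-like data.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(
        ["2013-01-01", "2013-02-01", "2013-03-05", "2013-01-10", "2013-04-01"]),
    "days_supply": [30, 30, 30, 30, 30],
    "daily_dose_mg": [150, 300, 300, 150, 150],
})

def summarize(group: pd.DataFrame) -> pd.Series:
    group = group.sort_values("fill_date")
    first_fill = group["fill_date"].iloc[0]
    # MPR: total days supplied divided by days in the observation window
    # (here, from first fill to the end of the last fill's supply).
    window_end = (group["fill_date"] + pd.to_timedelta(group["days_supply"], "D")).max()
    window_days = (window_end - first_fill).days
    mpr = group["days_supply"].sum() / window_days
    # Dose titration: any dose increase within 45 days of the first claim.
    early = group[group["fill_date"] <= first_fill + pd.Timedelta(days=45)]
    titrated = (early["daily_dose_mg"] > group["daily_dose_mg"].iloc[0]).any()
    return pd.Series({"mpr": round(mpr, 2),
                      "adherent": mpr >= 0.8,
                      "dose_titration": bool(titrated)})

print(claims.groupby("patient_id").apply(summarize))
```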

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Kochav ◽  
R.C Chen ◽  
J.M.D Dizon ◽  
J.A.R Reiffel

Abstract Background Theoretical concern exists regarding AV block (AVB) with class I antiarrhythmics (AADs) when bundle branch block (BBB) is present. Whether this is substantiated in real-world populations is unknown. Purpose To determine the relationship between type of AAD and incidence of AVB in patients with preexisting BBB. Methods We retrospectively studied all patients with BBB who received class I or III AADs between 1997 and 2019 to compare incidence of AVB. We defined index time as first exposure to either drug class and excluded patients with prior AVB or exposure to both classes. The time-at-risk window ended at first outcome occurrence or when patients were no longer observed in the database. We estimated hazard ratios for incident AVB using Cox proportional hazards models with propensity score stratification, adjusting for over 32,000 covariates from the electronic health record. Kaplan-Meier methods were used to determine treatment effects over time. Results Of 40,120 individuals with BBB, 148 were exposed to a class I AAD and 2401 to a class III AAD. Over nearly 4,200 person-years of follow-up, there were 22 and 620 outcome events in the class I and class III cohorts, respectively (Figure). In adjusted analyses, AVB risk was markedly lower in patients exposed to class I AADs compared with class III (HR 0.48 [95% CI 0.30–0.75]). Conclusion Among patients with BBB, exposure to class III AADs was strongly associated with greater risk of incident AVB. This likely reflects differences in the natural history of patients receiving class I vs class III AADs rather than adverse class III effects; however, the lack of worse outcomes acutely with class I AADs suggests that they may be safer in BBB than suspected. Funding Acknowledgement Type of funding source: None
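For readers unfamiliar with propensity-score-stratified Cox regression, the sketch below illustrates the general approach on simulated data using scikit-learn and lifelines. It is a drastically simplified stand-in for the large-scale adjustment described above (two covariates rather than 32,000), and all variable names and values are hypothetical.

```python
# Hedged sketch: Cox regression with propensity-score stratification on simulated data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "class_I": rng.binomial(1, 0.06, n),        # exposure: class I vs class III AAD
    "age": rng.normal(70, 10, n),
    "heart_failure": rng.binomial(1, 0.3, n),
})
df["time_to_avb"] = rng.exponential(60, n)       # months at risk (simulated)
df["avb_event"] = rng.binomial(1, 0.2, n)

# 1) Propensity score for receiving a class I AAD.
ps_model = LogisticRegression(max_iter=1000).fit(
    df[["age", "heart_failure"]], df["class_I"])
df["ps"] = ps_model.predict_proba(df[["age", "heart_failure"]])[:, 1]

# 2) Stratify on propensity-score quintiles.
df["ps_stratum"] = pd.qcut(df["ps"], 5, labels=False, duplicates="drop")

# 3) Cox model with a separate baseline hazard per stratum.
cph = CoxPHFitter()
cph.fit(df[["time_to_avb", "avb_event", "class_I", "ps_stratum"]],
        duration_col="time_to_avb", event_col="avb_event",
        strata=["ps_stratum"])
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```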


2017 ◽  
Vol 117 (06) ◽  
pp. 1072-1082 ◽  
Author(s):  
Xiaoyan Li ◽  
Steve Deitelzweig ◽  
Allison Keshishian ◽  
Melissa Hamilton ◽  
Ruslan Horblyuk ◽  
...  

Summary The ARISTOTLE trial showed a risk reduction of stroke/systemic embolism (SE) and major bleeding in non-valvular atrial fibrillation (NVAF) patients treated with apixaban compared to warfarin. This retrospective study used four large US claims databases (MarketScan, PharMetrics, Optum, and Humana) of NVAF patients newly initiating apixaban or warfarin from January 1, 2013 to September 30, 2015. After 1:1 warfarin-apixaban propensity score matching (PSM) within each database, the resulting patient records were pooled. Kaplan-Meier curves and Cox proportional hazards models were used to estimate the cumulative incidence and hazard ratios (HRs) of stroke/SE and major bleeding (identified using the first listed diagnosis of inpatient claims) within one year of therapy initiation. The study included a total of 76,940 (38,470 warfarin and 38,470 apixaban) patients. Among the 38,470 matched pairs, 14,563 were from MarketScan, 7,683 from PharMetrics, 7,894 from Optum, and 8,330 from Humana. Baseline characteristics were balanced between the two cohorts, with a mean (standard deviation [SD]) age of 71 (12) years and a mean (SD) CHA2DS2-VASc score of 3.2 (1.7). Apixaban initiators had a significantly lower risk of stroke/SE (HR: 0.67, 95 % CI: 0.59–0.76) and major bleeding (HR: 0.60, 95 % CI: 0.54–0.65) than warfarin initiators. Different types of stroke/SE and major bleeding – including ischaemic stroke, haemorrhagic stroke, SE, intracranial haemorrhage, gastrointestinal bleeding, and other major bleeding – were all significantly lower for apixaban compared to warfarin treatment. Subgroup analyses (apixaban dosage, age strata, CHA2DS2-VASc or HAS-BLED score strata, or dataset source) all showed consistently lower risks of stroke/SE and major bleeding associated with apixaban as compared to warfarin treatment. This is the largest “real-world” study on apixaban effectiveness and safety to date, showing that apixaban initiation was associated with significant risk reductions in stroke/SE and major bleeding compared to warfarin initiation after PSM. These benefits were consistent across various high-risk subgroups and both the standard- and low-dose apixaban regimens. Note: The review process for this manuscript was fully handled by Christian Weber, Editor in Chief. Supplementary Material to this article is available online at www.thrombosis-online.com.
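The design above (propensity-score matching followed by Cox modelling of the matched cohorts) can be sketched as follows. This is an illustrative simplification, not the study's pipeline: the covariates, caliper, greedy matching routine, and simulated data are all assumptions.

```python
# Hypothetical sketch: 1:1 greedy propensity-score matching, then a Cox model on the matched set.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "apixaban": rng.binomial(1, 0.4, n),
    "age": rng.normal(71, 12, n),
    "chads_vasc": rng.integers(0, 9, n),
})
df["time"] = rng.exponential(12, n)              # months to stroke/SE or censoring (simulated)
df["event"] = rng.binomial(1, 0.05, n)

# Propensity score for apixaban initiation.
X = df[["age", "chads_vasc"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["apixaban"]).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score within a caliper.
caliper = 0.2 * df["ps"].std()
treated = df[df["apixaban"] == 1].sort_values("ps")
controls = df[df["apixaban"] == 0].sort_values("ps").copy()
matched_ids = []
for _, row in treated.iterrows():
    if controls.empty:
        break
    dist = (controls["ps"] - row["ps"]).abs()
    best = dist.idxmin()
    if dist.loc[best] <= caliper:
        matched_ids += [row.name, best]
        controls = controls.drop(best)           # match without replacement

matched = df.loc[matched_ids]
cph = CoxPHFitter().fit(matched[["time", "event", "apixaban"]],
                        duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])
```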


2006 ◽  
Vol 24 (18_suppl) ◽  
pp. 560-560 ◽  
Author(s):  
D. A. Patt ◽  
Z. Duan ◽  
G. Hortobagyi ◽  
S. H. Giordano

560 Background: Adjuvant chemotherapy for breast cancer is associated with the development of secondary AML, but this risk in an older population has not been previously quantified. Methods: We queried data from the Surveillance, Epidemiology, and End Results-Medicare (SEER-Medicare) database for women who were diagnosed with nonmetastatic breast cancer from 1992–1999. We compared the risk of AML in patients with and without adjuvant chemotherapy (C), and by differing C regimens. The primary endpoint was a claim with an inpatient or outpatient diagnosis of AML (ICD-9 codes 205–208). Risk of AML was estimated using the Kaplan-Meier method. Cox proportional hazards models were used to determine factors independently associated with AML. Results: 36,904 patients were included in this observational study, 4,572 who had received adjuvant C and 32,332 who had not. The median patient age was 75.3 years (range 66.0–103.3). The median follow-up was 63 months (range 13–132). Patients who received C were significantly younger, had more advanced stage disease, and had lower comorbidity scores (p<0.001). The unadjusted risk of developing AML at 10 years after any adjuvant C for breast cancer was 1.6% versus 1.1% for women who had not received C. The adjusted HR for AML with adjuvant C was 1.72 (1.16–2.54) compared to women who did not receive C. The HR for radiation was 1.21 (0.86–1.70). The HR was higher with increasing age, but p>0.05. An analysis was performed among women who received C. When compared to other C regimens, anthracycline-based therapy (A) conveyed a significantly higher hazard for AML (HR 2.17, 1.08–4.38), while patients who received A plus taxanes (T) did not have a significant increase in risk (HR 1.29, 0.44–3.82), nor did patients who received T with some other C (HR 1.50, 0.34–6.67). GCSF use was another significant independent predictor of AML (HR 2.21, 1.14–4.25). In addition, increasing A dose was associated with higher risk of AML (p<0.05). Conclusions: There is a small but real increase in AML after adjuvant chemotherapy for breast cancer in older women. The risk appears to be highest with A-based regimens, most of which also contained cyclophosphamide, and may be dose-dependent. Taxanes do not appear to increase risk. The role of GCSF should be further explored. No significant financial relationships to disclose.
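As a small illustration of the Kaplan-Meier risk estimate quoted above (risk at 10 years = 1 minus the survival probability at 120 months), here is a hedged sketch using lifelines on simulated follow-up data; nothing in it comes from SEER-Medicare.

```python
# Hypothetical sketch: cumulative 10-year AML risk from a Kaplan-Meier fit.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 1000
time_to_aml_months = rng.exponential(600, n).clip(max=132)   # follow-up capped at 132 months
aml_event = rng.binomial(1, 0.015, n)

kmf = KaplanMeierFitter().fit(time_to_aml_months, aml_event, label="any adjuvant chemo")
risk_10y = 1 - kmf.survival_function_at_times(120).iloc[0]    # risk = 1 - S(120 months)
print(f"Estimated 10-year AML risk: {risk_10y:.2%}")
```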


2015 ◽  
Vol 33 (3_suppl) ◽  
pp. 453-453
Author(s):  
Kelly Elizabeth Orwat ◽  
Samuel Lewis Cooper ◽  
Michael Ashenafi ◽  
M. Bret Anderson ◽  
Marcelo Guimaraes ◽  
...  

453 Background: Systemic therapies for unresectable liver malignancies may provide a survival benefit, but eventually prove intolerable or ineffective. TARE provides an additional liver-directed treatment option to improve local control for these patients, but there are limited data on patient factors associated with survival. Methods: All patients who received TARE at the Medical University of South Carolina from March 2006 through May 2014 were included in this analysis of overall survival (OS) and toxicity. Kaplan-Meier estimates of OS from the date of first procedure are reported. Potential prognostic factors for OS were evaluated using log-rank tests and Cox proportional hazards models. Results: In the 114 patients who received TARE at our institution, median follow-up was 6.4 months [range 0-86], with the following tumor histology: colorectal (CR) n=55, hepatocellular (HC) n=20, cholangiocarcinoma (CC) n=16, neuroendocrine (NE) n=12, breast (BR) n=6, other n=5. At least 1 line of systemic therapy prior to TARE was noted in 79% of patients. Median OS was 6.6 months and 1-year OS was 30.7%. The percentage of patients who died within 3 months of TARE was 46.2% for patients with albumin < 3 but only 20.3% for patients with albumin ≥ 3. Grade ≥ 2 toxicity was observed in 22 patients (19.3%), including 9 (7.9%) with Grade 3 and 1 (0.9%) with Grade 4 toxicity. A single patient with a pre-existing pulmonary arteriovenous malformation experienced Grade 3 pneumonitis that resolved with steroids. No deaths were attributed to radiation-induced liver disease. Conclusions: TARE is a relatively safe and effective treatment for unresectable intrahepatic malignancies. NE or BR histology and better hepatic synthetic function were associated with significantly better survival. Our data suggest that patients with albumin below 3 may not benefit from TARE. [Table: see text]
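A log-rank comparison of overall survival across histology groups, as described in the methods, could be sketched as below with lifelines. The group labels follow the abstract, but the survival times, event indicators, and group medians are simulated placeholders.

```python
# Hypothetical sketch: multi-group log-rank test of OS by tumor histology.
import numpy as np
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(3)
histologies = ["CR", "HC", "CC", "NE", "BR"]
rows = []
for h, median_os in zip(histologies, [6, 7, 5, 20, 15]):   # illustrative medians in months
    rows.append(pd.DataFrame({
        "histology": h,
        "os_months": rng.exponential(median_os / np.log(2), 20),
        "death": rng.binomial(1, 0.8, 20),
    }))
df = pd.concat(rows, ignore_index=True)

result = multivariate_logrank_test(df["os_months"], df["histology"], df["death"])
print(f"Log-rank chi2 = {result.test_statistic:.2f}, p = {result.p_value:.3f}")
```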


2018 ◽  
Vol 36 (4_suppl) ◽  
pp. 666-666
Author(s):  
Anuj K. Patel ◽  
Mei Sheng Duh ◽  
Victoria Barghout ◽  
Mihran Ara Yenikomshian ◽  
Yongling Xiao ◽  
...  

666 Background: FTD/TPI and REG both prolong survival in refractory mCRC and have similar indications with different side effect profiles. This study compares real-world treatment patterns with FTD/TPI and REG for mCRC in a large, representative US claims database. Methods: Retrospective data from 10/2014 to 7/2016 from the US Symphony Health Solutions’ Integrated Dataverse (IDV®) database were analyzed for patients receiving FTD/TPI or REG. The index date was the date of the first FTD/TPI or REG prescription. Patients were included if they: 1) were ≥18 years old, 2) had ≥1 CRC diagnosis, 3) had no diagnosis of gastric cancer or gastrointestinal stromal tumor, and 4) had continuous clinical activity for ≥3 months before and after the index date. The observation period spanned from the index date to end of data, end of continuous clinical activity, or switch to another mCRC treatment. Adherence was assessed using medication possession ratio (MPR) ≥0.80 and proportion of days covered (PDC) ≥0.80 at 3 months. Compliance was assessed as time to discontinuation over the observation period, using allowable gaps of 45, 60, or 90 days. Patients who never discontinued therapy were censored at the end of the observation period. Outcomes were compared between FTD/TPI and REG using multivariate logistic regression and Cox proportional hazards models, adjusting for demographic and clinical baseline characteristics. Results: A total of 1,630 FTD/TPI patients and 1,425 REG patients were identified. Mean ± standard deviation (SD) age of FTD/TPI patients was 61.0 ± 11.0 years compared to 62.8 ± 10.9 years for REG patients (p < 0.001). FTD/TPI patients were 80% more likely to have an MPR ≥0.80 compared to those on REG (Odds Ratio [OR] = 1.80, p < 0.001) and more than twice as likely to have a PDC ≥0.80 (OR = 2.66, p < 0.001) at 3 months. FTD/TPI patients were 37% less likely to discontinue their treatment compared to those on REG when using a 60-day gap (Hazard Ratio = 0.63, p < 0.001). Similar results were found with 45- and 90-day gaps. Conclusions: In this retrospective study of mCRC patients, patients on FTD/TPI were significantly more likely to adhere to and comply with therapy compared to those on REG.
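The PDC-at-3-months and gap-based discontinuation measures above are straightforward to compute from fill-level claims. The following is a hypothetical sketch of both calculations; only the 90-day window, the 0.80 threshold, and the 60-day allowable gap come from the abstract, while the column names and example fills are invented.

```python
# Hypothetical sketch: PDC over the first 90 days and discontinuation with a 60-day allowable gap.
import pandas as pd

fills = pd.DataFrame({
    "patient_id": [1, 1, 1],
    "fill_date": pd.to_datetime(["2015-01-01", "2015-02-05", "2015-06-01"]),
    "days_supply": [28, 28, 28],
})

def pdc_and_discontinuation(group, window_days=90, allowable_gap=60):
    group = group.sort_values("fill_date")
    index_date = group["fill_date"].iloc[0]
    # PDC: distinct covered days within the window / window length.
    covered = set()
    for _, r in group.iterrows():
        for d in pd.date_range(r["fill_date"], periods=r["days_supply"]):
            if (d - index_date).days < window_days:
                covered.add(d)
    pdc = len(covered) / window_days
    # Discontinuation: the gap between the end of one fill's supply and the next
    # fill exceeds the allowable gap.
    supply_end = group["fill_date"] + pd.to_timedelta(group["days_supply"], "D")
    gaps = (group["fill_date"].shift(-1) - supply_end).dt.days
    discontinued = gaps.gt(allowable_gap).any()
    return pd.Series({"pdc_90d": round(pdc, 2),
                      "adherent_pdc": pdc >= 0.8,
                      "discontinued": bool(discontinued)})

print(fills.groupby("patient_id").apply(pdc_and_discontinuation))
```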


2021 ◽  
Vol 8 ◽  
Author(s):  
Bin Zhou ◽  
Xuerong Sun ◽  
Na Yu ◽  
Shuang Zhao ◽  
Keping Chen ◽  
...  

Background: The results of studies on the obesity paradox in all-cause mortality are inconsistent in patients equipped with an implantable cardioverter-defibrillator (ICD), and large-sample studies in Chinese populations are lacking. This study aimed to investigate whether the obesity paradox in all-cause mortality is present among the Chinese population with an ICD. Methods: We conducted a retrospective analysis of multicenter data from the Study of Home Monitoring System Safety and Efficacy in Cardiac Implantable Electronic Device–implanted Patients (SUMMIT) registry in China. The outcome was all-cause mortality. Kaplan–Meier curves, Cox proportional hazards models, and smooth curve fitting were used to investigate the association between body mass index (BMI) and all-cause mortality. Results: After applying the inclusion and exclusion criteria, 970 patients with an ICD were enrolled. After a median follow-up of 5 years (interquartile range, 4.1–6.0 years), all-cause mortality occurred in 213 (22.0%) patients. According to the Kaplan–Meier curves and multivariate Cox proportional hazards models, BMI had no significant impact on all-cause mortality, whether as a continuous variable or as a categorical variable classified by various BMI categorization criteria. The fully adjusted smoothed curve fit showed a linear relationship between BMI and all-cause mortality (p-value of 0.14 for the non-linearity test), with the curve showing no statistically significant association between BMI and all-cause mortality [per 1 kg/m2 increase in BMI, hazard ratio (HR) 0.97, 95% CI 0.93–1.02, p = 0.2644]. Conclusions: The obesity paradox in all-cause mortality was absent in the Chinese patients with an ICD. Prospective studies are needed to further explore this phenomenon.
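The abstract's "smooth curve fitting" for the BMI-mortality relationship is commonly implemented as a spline term in a Cox model. The sketch below takes that route with patsy and lifelines on simulated data; it is one plausible reading of the method, not the authors' code, and every variable and value in it is illustrative.

```python
# Hedged sketch: Cox model with a restricted cubic spline for BMI to probe non-linearity.
import numpy as np
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 970
df = pd.DataFrame({
    "bmi": rng.normal(24, 3.5, n),
    "age": rng.normal(60, 12, n),
    "followup_years": rng.exponential(5, n).clip(max=6.0),
    "death": rng.binomial(1, 0.22, n),
})

# Natural (restricted) cubic spline basis for BMI with 4 degrees of freedom.
spline = dmatrix("cr(bmi, df=4) - 1", df, return_type="dataframe")
spline.columns = [f"bmi_spline_{i}" for i in range(spline.shape[1])]

model_df = pd.concat([df[["followup_years", "death", "age"]], spline], axis=1)
cph = CoxPHFitter().fit(model_df, duration_col="followup_years", event_col="death")

# A likelihood-ratio comparison against a linear-BMI model would formalise the
# non-linearity test; here we simply inspect the spline coefficients.
print(cph.summary[["coef", "p"]])
```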


2021 ◽  
pp. 1-14 ◽  
Author(s):  
Olga Mitelman ◽  
Hoda Z. Abdel-Hamid ◽  
Barry J. Byrne ◽  
Anne M. Connolly ◽  
Peter Heydemann ◽  
...  

Background: Studies 4658-201/202 (201/202) evaluated treatment effects of eteplirsen over 4 years in patients with Duchenne muscular dystrophy and confirmed exon-51 amenable genetic mutations. The chart-review Study 4658-405 (405) further followed these patients as they received eteplirsen during usual clinical care. Objective: To compare long-term clinical outcomes of eteplirsen-treated patients from Studies 201/202/405 with those of external controls. Methods: Median total follow-up time was approximately 6 years of eteplirsen treatment. Outcomes included loss of ambulation (LOA) and percent-predicted forced vital capacity (FVC%p). Time to LOA was compared between eteplirsen-treated patients and standard of care (SOC) external controls and was measured from eteplirsen initiation in 201/202 or, in the SOC group, from the first study visit. Comparisons were conducted using univariate Kaplan-Meier analyses and log-rank tests, and multivariate Cox proportional hazards models with regression adjustment for baseline characteristics. Annual change in FVC%p was compared between eteplirsen-treated patients and natural history study patients using linear mixed models with repeated measures. Results: Data were included from all 12 patients in Studies 201/202 and the 10 patients with available data from 405. Median age at LOA was 15.16 years. Eteplirsen-treated patients experienced a significantly longer median time to LOA, by 2.09 years (5.09 vs. 3.00 years, p < 0.01), and significantly attenuated rates of pulmonary decline vs. natural history patients (FVC%p change: –3.3 vs. –6.0 percentage points annually, p < 0.0001). Conclusions: Study 405 highlights the functional benefits of eteplirsen on ambulatory and pulmonary function outcomes up to 7 years of follow-up in comparison to external controls.
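The comparison of annual FVC%p decline via linear mixed models with repeated measures can be illustrated with statsmodels. The sketch below simulates slopes echoing the reported -3.3 vs. -6.0 percentage points per year; the size of the comparison cohort, the noise level, and all variable names are assumptions.

```python
# Hedged sketch: random-intercept/random-slope mixed model for annual FVC%p change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for pid in range(60):
    treated = 1 if pid < 12 else 0                 # 12 eteplirsen-treated patients (abstract)
    slope = -3.3 if treated else -6.0              # simulated annual decline, echoing the abstract
    baseline = rng.normal(80, 8)
    for yr in range(6):
        rows.append({"patient_id": pid, "treated": treated, "years": yr,
                     "fvc_pp": baseline + slope * yr + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# The treated:years interaction is the difference in annual FVC%p decline between groups.
model = smf.mixedlm("fvc_pp ~ years * treated", df,
                    groups=df["patient_id"], re_formula="~years")
fit = model.fit()
print(fit.summary())
```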


2021 ◽  
Author(s):  
Elke Wynberg ◽  
Hugo van Willigen ◽  
Maartje Dijkstra ◽  
Anders Boyd ◽  
Neeltje A. Kootstra ◽  
...  

Background: Few longitudinal data on COVID-19 symptoms across the full spectrum of disease severity are available. We evaluated symptom onset, severity and recovery up to nine months after illness onset. Methods: The RECoVERED Study is a prospective cohort study based in Amsterdam, the Netherlands. Participants aged >18 years were recruited following SARS-CoV-2 diagnosis via the local Public Health Service and from hospitals. Standardised symptom questionnaires were completed at recruitment, at one week and one month after recruitment, and monthly thereafter. Clinical severity was defined according to WHO criteria. Kaplan-Meier methods were used to compare time from illness onset to symptom recovery, by clinical severity. We examined determinants of time to recovery using multivariable Cox proportional hazards models. Results: Between 11 May 2020 and 31 January 2021, 301 COVID-19 patients (167 [55%] male) were recruited, of whom 99/301 (32.9%) had mild, 140/301 (46.5%) moderate, 30/301 (10.0%) severe and 32/301 (10.6%) critical disease. The proportion of participants reporting at least one persistent symptom at 12 weeks after illness onset was greater in those with severe/critical disease (81.7% [95% CI 68.7-89.7%]) compared to those with mild or moderate disease (33.0% [95% CI 23.0-43.3%] and 63.8% [95% CI 54.8-71.5%], respectively). At nine months after illness onset, almost half of all participants (42.1% [95% CI 35.6-48.5%]) continued to report ≥1 symptom. Recovery was slower in participants with BMI ≥30 kg/m2 (HR 0.51 [95% CI 0.30-0.87]) than in those with BMI <25 kg/m2, after adjusting for age, sex and number of comorbidities. Conclusions: COVID-19 symptoms persisted up to nine months after illness onset, even in those with mild disease. Obesity was the most important determinant of time to recovery from symptoms.
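To make the Kaplan-Meier comparison of time to symptom recovery by severity group concrete, here is a small hypothetical sketch with lifelines. The group labels and sizes follow the abstract; the recovery times, the group medians, and censoring at roughly nine months (39 weeks) are simulated.

```python
# Hypothetical sketch: median time to symptom recovery by clinical severity group.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(6)
groups = {"mild": 99, "moderate": 140, "severe/critical": 62}
medians = {"mild": 8, "moderate": 20, "severe/critical": 35}   # weeks, illustrative

for label, n in groups.items():
    weeks_to_recovery = rng.exponential(medians[label] / np.log(2), n)
    recovered = weeks_to_recovery < 39                 # censor at ~9 months of follow-up
    kmf = KaplanMeierFitter().fit(np.minimum(weeks_to_recovery, 39),
                                  recovered, label=label)
    print(f"{label}: median time to recovery ≈ {kmf.median_survival_time_:.1f} weeks")
```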


2021 ◽  
Author(s):  
Yuxin Ding ◽  
Runyi Jiang ◽  
Yuhong Chen ◽  
Jing Jing ◽  
Xiaoshuang Yang ◽  
...  

Abstract Background Previous studies have reported poorer survival in head and neck melanoma (HNM) than in body melanoma (BM). Individualized tools to predict the prognosis for patients with HNM or BM remain insufficient. Objectives To compare the characteristics of HNM and BM, and to establish and validate nomograms for predicting the 3-, 5- and 10-year survival of patients with HNM or BM. Methods We studied patients with HNM or BM from 2004 to 2015 in the Surveillance, Epidemiology, and End Results (SEER) database. The HNM group and BM group were randomly divided into training and validation cohorts. We performed the Kaplan-Meier method for survival analysis, and used multivariate Cox proportional hazards models to identify independent prognostic factors. Nomograms for HNM patients and BM patients were developed with the rms package and were evaluated using the concordance index (C-index), the area under the receiver operating characteristic (ROC) curve (AUC) and calibration plots. Results Of the 70,605 patients acquired, 21% (n = 15,071) had HNM and 79% (n = 55,534) had BM. The HNM group contained more older patients, male patients, and lentigo maligna melanoma, and more frequently had thicker tumors and metastases than the BM group. The 5-year CSS and OS rates were 88.1 ± 0.3% and 74.4 ± 0.4% in the HNM group and 92.5 ± 0.1% and 85.8 ± 0.2% in the BM group, respectively. Eight independent prognostic factors (age, sex, histology, thickness, ulceration, stage, metastases, and surgery) were identified and used to construct the nomograms. The performance of the nomograms was excellent: the C-indices of the CSS prediction for HNM patients and BM patients in the training cohort were 0.839 and 0.895, respectively; in the validation cohort, they were 0.848 and 0.888, respectively; the AUCs for the 3-, 5- and 10-year CSS rates of HNM were 0.871, 0.865 and 0.854 (training) and 0.881, 0.879 and 0.861 (validation), respectively; for BM, the AUCs were 0.924, 0.918 and 0.901 (training) and 0.916, 0.908 and 0.893 (validation), respectively; and the calibration plots showed good consistency. Conclusions The characteristics of HNM and BM are heterogeneous, and we constructed and validated specific nomograms as practical prognostic tools for patients with HNM or BM.
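The nomogram validation above was performed in R with the rms package; as a rough Python analogue, the sketch below fits a Cox model on a training split and computes the concordance index on a held-out split with lifelines. The covariates, simulated data, and split sizes are assumptions chosen only to mirror the train/validate idea.

```python
# Hedged sketch: train/validation split with a held-out concordance index (C-index).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(7)
n = 3000
df = pd.DataFrame({
    "age": rng.normal(60, 15, n),
    "thickness_mm": rng.exponential(1.5, n),
    "ulceration": rng.binomial(1, 0.2, n),
})
risk = 0.02 * df["age"] + 0.4 * df["thickness_mm"] + 0.8 * df["ulceration"]
df["months"] = rng.exponential(1, n) * np.exp(-risk) * 120    # higher risk -> shorter survival
df["death"] = rng.binomial(1, 0.3, n)

train, test = df.iloc[:2000], df.iloc[2000:]
cph = CoxPHFitter().fit(train, duration_col="months", event_col="death")

# Discrimination on held-out data: a higher predicted partial hazard should pair
# with shorter observed survival, hence the minus sign.
c_index = concordance_index(test["months"],
                            -cph.predict_partial_hazard(test),
                            test["death"])
print(f"Validation C-index: {c_index:.3f}")
```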


2006 ◽  
Vol 24 (18_suppl) ◽  
pp. 510-510 ◽  
Author(s):  
D. F. Hayes ◽  
A. Thor ◽  
L. Dressler ◽  
D. Weaver ◽  
G. Broadwater ◽  
...  

510 Background: CALGB 9344 showed that 4 cycles of paclitaxel (T) after 4 cycles of doxorubicin/cyclophosphamide (AC) improved disease-free (DFS) and overall survival (OS) compared to 4 cycles of AC alone. A higher dose of A had no benefit (Henderson JCO ’03). Prior studies suggest HER2 is associated with benefit from standard vs low dose of C&A (Dressler JCO ’05). We hypothesized that HER2 might predict benefit from a higher dose of A or from T, and that HER2 might refine the observed negative interaction of T with estrogen receptor (ER). Methods: 3,121 node-positive women in CALGB 9344 received 4 q3wk cycles of AC (A: 60, 75, or 90 mg/m2) and then 4 cycles of T (175 mg/m2 q3wk) or no T. Blocks were collected from ∼2,800 subjects. Two sets of 750 patients each were randomly selected from these cases: Set 1 to develop hypotheses; Set 2 for validation. Tissue specimens were available from 643 (Set 1) and 679 (Set 2) cases (20% and 22% of the total enrolled in 9344, respectively). HER2 was evaluated by FISH and by IHC (by the CB11 antibody and by HercepTest). Statistical analyses used Cox proportional hazards models, including interaction terms, and Kaplan-Meier estimates for comparing 5-yr DFS by treatment group. Results: In Set 1, all 3 assays suggested that T improved DFS for HER2+ but not for HER2- tumors. For this single set the interaction was not statistically significant. There appeared to be an interaction of HER2, T and ER. IHC using CB11 was applied to Set 2, revealing nearly identical results. In the two sets combined (n=1,322), the interaction between HER2 and T was statistically significant (p=0.013). The 3-way interaction of HER2, ER and T was hypothesis-generating and not tested statistically. Differences in 5-yr DFS rates (95% CI) for T vs. no T by HER2 and ER (both sets combined) are shown in the table. There was no interaction between HER2 and dose of A. Conclusions: These results suggest that the benefit of adding T to AC is greater for HER2+ tumors, even if ER+, while T was of no apparent benefit in the ER+, HER2- group. Further validation is needed from remaining cases in 9344 and from other trials involving T. [Table: see text] [Table: see text]
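The HER2-by-paclitaxel interaction test described above amounts to a product term in a Cox model. Below is a hedged sketch of that idea on simulated data with lifelines; the effect size, event rate, and HER2 prevalence are invented, with only the combined sample size of 1,322 echoing the abstract.

```python
# Hedged sketch: treatment-by-biomarker interaction as a product term in a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n = 1322
df = pd.DataFrame({
    "paclitaxel": rng.binomial(1, 0.5, n),   # randomised to T vs no T
    "her2_pos": rng.binomial(1, 0.25, n),    # HER2 status (simulated prevalence)
})
# Simulate a paclitaxel benefit confined to HER2-positive tumours.
protective = np.exp(-0.7 * df["paclitaxel"] * df["her2_pos"])
df["dfs_years"] = rng.exponential(8, n) / protective
df["recurrence"] = rng.binomial(1, 0.4, n)

# Explicit product term; its p-value tests the treatment-by-marker interaction.
df["paclitaxel_x_her2"] = df["paclitaxel"] * df["her2_pos"]
cph = CoxPHFitter().fit(df, duration_col="dfs_years", event_col="recurrence")
print(cph.summary.loc[["paclitaxel", "her2_pos", "paclitaxel_x_her2"],
                      ["exp(coef)", "p"]])
```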

