Actual versus recommended storage temperatures of oral anticancer medicines at patients’ homes

2017 ◽  
Vol 25 (2) ◽  
pp. 382-389 ◽  
Author(s):  
ND Vlieland ◽  
BJF van den Bemt ◽  
DA van Riet-Nales ◽  
ML Bouvy ◽  
ACG Egberts ◽  
...  

Background Substantial quantities of unused medicines are returned by patients to the pharmacy each year. Redispensing these medicines would reduce medicinal waste and health care costs. However, it is not known whether patients store medicines as recommended on the product label. Inadequate storage may negatively affect the medicine and reduce clinical efficacy whilst increasing the risk of side effects. Objective To investigate the proportion of patients storing oral anticancer medicines according to the temperature instructions in the product label. Methods Consenting adult patients from six Dutch outpatient hospital pharmacies were included in this study if they used an oral anticancer medicine during February 2014 – January 2015. Home storage temperatures were assessed by including a temperature logger in the original cancer medicine packaging. The primary outcome was the proportion of patients storing oral anticancer medicines as specified in the Summary of Product Characteristics, either by recalculating the observed temperature fluctuations to a single mean kinetic temperature or by following the temperature instructions taking into account a consecutive 24-h tolerance period. Results Ninety (81.1%) of the 111 included patients (47.8% female, mean age 65.2 (SD: 11.1)) returned their temperature loggers to the pharmacy. None of the patients stored oral anticancer medicines at a mean kinetic temperature above 25℃; one patient stored a medicine requiring storage below 25℃ above that limit for longer than 24 consecutive hours. None of the patients using medicines requiring storage below 30℃ kept their medicine above 30℃ for a consecutive period of 24 h or longer. Conclusion The majority of patients using oral anticancer medicines store their medicines according to the temperature requirements on the product label. Based on our results, most oral anticancer medicines will not be negatively affected by temperature conditions at patients’ homes for a maximum of three months and are likely to be suitable for redispensing.
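
The primary outcome rests on recalculating a fluctuating temperature trace into a single mean kinetic temperature (MKT). A minimal sketch of that calculation, assuming the conventional activation energy of 83.144 kJ/mol (the abstract does not report the value the authors used, so both the constant and the example readings are illustrative):

```python
import math

def mean_kinetic_temperature(temps_c, delta_h=83144.0, r=8.3144):
    """MKT in deg C from a series of logged temperatures (deg C).

    delta_h is the activation energy in J/mol; 83.144 kJ/mol is the
    conventional default and an assumption here, not a value taken
    from the study. r is the gas constant in J/(mol*K).
    """
    temps_k = [t + 273.15 for t in temps_c]
    mean_exp = sum(math.exp(-delta_h / (r * t)) for t in temps_k) / len(temps_k)
    return (delta_h / r) / -math.log(mean_exp) - 273.15

# Hypothetical logger trace: steady 21 C with a brief 28 C excursion.
readings = [21.0] * 160 + [28.0] * 8
print(f"MKT = {mean_kinetic_temperature(readings):.2f} C")  # ~21.5 C
```

Because the averaging is exponential in temperature, short warm excursions raise the MKT more than an arithmetic mean would, yet a trace can still spend brief periods above 25℃ and comply overall, which is consistent with the study's separate 24-h tolerance criterion.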

2012 ◽  
Vol 95 (4) ◽  
pp. 1059-1063 ◽  
Author(s):  
Stephen F Tomasino ◽  
Rebecca M Pines ◽  
Gordon Hamilton

Abstract The AOAC Use-Dilution Methods, 955.15 (Staphylococcus aureus) and 964.02 (Pseudomonas aeruginosa), were revised in 2009 to include a standardized procedure to measure the log density of the test microbe and to establish a minimum mean log density value of 6.0 (geometric mean of 1.0 × 10⁶ CFU/carrier) to qualify the test results. This report proposes setting a maximum mean log density value of 7.0 (geometric mean of 1.0 × 10⁷ CFU/carrier) to further standardize the procedure. The minimum value was based on carrier count data collected by four laboratories over an 8-year period (1999–2006). The data have been updated to include an additional 4 years' worth of data (2006–2010) collected by the same laboratories. A total of 512 tests were conducted on products bearing claims against P. aeruginosa and S. aureus with and without an organic soil load (OSL) added to the inoculum (as specified on the product label claim). Six carriers were assayed in each test, for a total of 3072 carriers. Mean log densities for each of the 512 tests were at least 6.0. With the exception of two tests, one for P. aeruginosa without OSL and one for S. aureus with OSL, the mean log densities did not exceed 7.5 (geometric mean of 3.2 × 10⁷ CFU/carrier). Across microbes and OSL treatments, the mean log density (±SEM) was 6.80 (±0.07) per carrier (a geometric mean of 6.32 × 10⁶ CFU/carrier), with acceptable repeatability (0.28) and reproducibility (0.31) SDs. A maximum mean log density per carrier of 7.0 is proposed here as a validity requirement for S. aureus and P. aeruginosa. A modification to the method to allow dilution of the final test cultures to achieve carrier counts within 6.0–7.0 logs is also proposed. Establishing a range of 6.0–7.0 logs will help improve the reliability of the method and should allow more consistent results within and among laboratories.
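
The qualifying statistic is computed per test as the arithmetic mean of the log10 carrier counts, whose antilog is the geometric mean in CFU/carrier. A short sketch of the check, using hypothetical carrier counts rather than data from the report:

```python
import math

def mean_log_density(carrier_counts):
    """Mean log10 density across the carriers of one test."""
    return sum(math.log10(c) for c in carrier_counts) / len(carrier_counts)

def test_qualifies(carrier_counts, lo=6.0, hi=7.0):
    """Apply the existing 6.0 minimum and the proposed 7.0 maximum."""
    m = mean_log_density(carrier_counts)
    return lo <= m <= hi, m

# Hypothetical counts (CFU/carrier) for one six-carrier test.
counts = [2.1e6, 3.4e6, 1.8e6, 5.0e6, 2.9e6, 4.2e6]
ok, m = test_qualifies(counts)
print(f"mean log density = {m:.2f} "
      f"(geometric mean = {10**m:.2e} CFU/carrier), qualifies: {ok}")
```

Averaging on the log scale is why the report quotes both figures: a mean log density of 6.80 corresponds to a geometric mean of 10^6.80 ≈ 6.3 × 10⁶ CFU/carrier, not an arithmetic mean of the raw counts.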


2019 ◽  
Vol 23 (54) ◽  
pp. 1-54
Author(s):  
Marian Knight ◽  
Virginia Chiocchia ◽  
Christopher Partlett ◽  
Oliver Rivero-Arias ◽  
Xinyang Hua ◽  
...  

Background Sepsis is a leading cause of direct and indirect maternal death in both the UK and globally. All forms of operative delivery are associated with an increased risk of sepsis, and the National Institute for Health and Care Excellence’s guidance recommends the use of prophylactic antibiotics at all caesarean deliveries, based on substantial randomised controlled trial evidence of clinical effectiveness. A Cochrane review, updated in 2017 (Liabsuetrakul T, Choobun T, Peeyananjarassri K, Islam QM. Antibiotic prophylaxis for operative vaginal delivery. Cochrane Database Syst Rev 2017;8:CD004455), identified only one small previous trial of prophylactic antibiotics following operative vaginal birth (forceps or ventouse/vacuum extraction) and, given the small study size and extreme result, suggested that further robust evidence is needed. Objectives To investigate whether or not a single dose of prophylactic antibiotic following operative vaginal birth is clinically effective for preventing confirmed or presumed maternal infection, and to investigate the associated impact on health-care costs. Design A multicentre, randomised, blinded, placebo-controlled trial. Setting Twenty-seven maternity units in the UK. Participants Women who had an operative vaginal birth at ≥ 36 weeks’ gestation, who were not known to be allergic to penicillin or constituents of co-amoxiclav and who had no indication for ongoing antibiotics. Interventions A single dose of intravenous co-amoxiclav (1 g of amoxicillin/200 mg of clavulanic acid) or placebo (sterile saline) allocated through sealed, sequentially numbered, indistinguishable packs. Main outcome measures Primary outcome – confirmed or suspected infection within 6 weeks of giving birth. Secondary outcomes – severe sepsis, perineal wound infection, perineal pain, use of pain relief, hospital bed stay, hospital/general practitioner visits, need for additional perineal care, dyspareunia, ability to sit comfortably to feed the baby, maternal general health, breastfeeding, wound breakdown, occurrence of anaphylaxis and health-care costs. Results Between March 2016 and June 2018, 3427 women were randomised: 1719 to the antibiotic arm and 1708 to the placebo arm. Seven women withdrew, leaving 1715 women in the antibiotic arm and 1705 in the placebo arm for analysis. Primary outcome data were available for 3225 out of 3420 women (94.3%). Women randomised to the antibiotic arm were significantly less likely to have confirmed or suspected infection within 6 weeks of giving birth (180/1619, 11%) than women randomised to the placebo arm (306/1606, 19%) (relative risk 0.58, 95% confidence interval 0.49 to 0.69). Three serious adverse events were reported: one in the placebo arm and two in the antibiotic arm (one was thought to be causally related to the intervention). Limitations The follow-up rate achieved for most secondary outcomes was 76%. Conclusions This trial has shown clear evidence of benefit of a single intravenous dose of prophylactic co-amoxiclav after operative vaginal birth. These results may lead to reconsideration of official policy/guidance. Further analysis of the mechanism of action of this single dose of antibiotic is needed to investigate whether earlier, pre-delivery or repeated administration could be more effective. Until these analyses are completed, there is no indication for administration of more than a single dose of prophylactic antibiotic, or for pre-delivery administration. Trial registration Current Controlled Trials ISRCTN11166984. 
Funding This project was funded by the National Institute for Health Research Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 54. See the National Institute for Health Research Journals Library website for further project information.
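
The headline effect can be reproduced from the reported counts with the standard Wald interval for a log relative risk; the trial's own analysis may differ in detail, so this is a plausibility check rather than a reconstruction of their statistics:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk with a 95% Wald CI computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Primary outcome counts: 180/1619 (antibiotic) vs 306/1606 (placebo).
rr, lo, hi = relative_risk(180, 1619, 306, 1606)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Running this yields approximately 0.58 (0.49 to 0.69), matching the published estimate.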


HortScience ◽  
1992 ◽  
Vol 27 (9) ◽  
pp. 989-992 ◽  
Author(s):  
William J. Carpenter ◽  
Joseph F. Boucher

Delphinium seed germination was about equal in light or darkness. Total germination percentages were highest and about equal at constant 15 or 20C and alternating (12 h) 10/20C, 15/25C, or 20/30C. The most rapid and uniform germination generally occurred at constant 20C. Storing seeds dry at 2C for 3 weeks before germination reduced the days to 50% of final germination (T50) and the span between 10% and 90% germination (T90 - T10) but did not increase total germination. The seeds had only limited desiccation tolerance, with `Magic Fountains Lavender' having declining germination percentages at moisture contents below 7.0% and `Magic Fountains Lilac' below 6.7%. Seeds tolerated storage at low, nonfreezing or subzero temperatures, but cultivar responses differed. `Magic Fountains Lavender' had progressively lower germination percentages as storage temperatures declined from 5 to –20C, while `Magic Fountains Lilac' germination increased. The relative humidity (RH) and temperature that delphinium seeds received during long-term storage influenced germination. Germination after seed storage at 5C was higher, earlier, and more uniform than after storage at 15 or 25C. The highest total germination percentages occurred following seed storage at 5C and 30% to 50% RH, the shortest T50 from 35% to 55% RH, and the shortest spans (T90 - T10) from 25% to 50% RH.
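
T50 and the T90 - T10 span are percentile times on the cumulative germination curve. The abstract does not say how they were interpolated; a common choice, sketched here with hypothetical daily counts, is linear interpolation between successive observations:

```python
def time_to_fraction(days, cumulative_pct, target_pct):
    """Linearly interpolate the day at which cumulative germination
    (expressed as % of final germination) first reaches target_pct."""
    points = list(zip(days, cumulative_pct))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 <= target_pct <= p1:
            return d0 + (target_pct - p0) * (d1 - d0) / (p1 - p0)
    return None

# Hypothetical counts: cumulative % of final germination by day.
days = [2, 4, 6, 8, 10, 12, 14]
cum = [0, 5, 30, 62, 85, 94, 100]
t50 = time_to_fraction(days, cum, 50)
span = time_to_fraction(days, cum, 90) - time_to_fraction(days, cum, 10)
print(f"T50 = {t50:.1f} d, T90 - T10 = {span:.1f} d")
```

A smaller T50 means faster germination and a smaller T90 - T10 span means more uniform germination, which is how the storage and priming effects above are scored.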


10.2196/13147 ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. e13147 ◽  
Author(s):  
Alistair Connell ◽  
Rosalind Raine ◽  
Peter Martin ◽  
Estela Capelas Barbosa ◽  
Stephen Morris ◽  
...  

Background The development of acute kidney injury (AKI) in hospitalized patients is associated with adverse outcomes and increased health care costs. Simple automated e-alerts indicating its presence do not appear to improve outcomes, perhaps because of a lack of explicitly defined integration with a clinical response. Objective We sought to test this hypothesis by evaluating the impact of a digitally enabled intervention on clinical outcomes and health care costs associated with AKI in hospitalized patients. Methods We developed a care pathway comprising automated AKI detection, mobile clinician notification, in-app triage, and a protocolized specialist clinical response. We evaluated its impact by comparing data from pre- and postimplementation phases (May 2016 to January 2017 and May to September 2017, respectively) at the intervention site and another site not receiving the intervention. Clinical outcomes were analyzed using segmented regression analysis. The primary outcome was recovery of renal function to ≤120% of baseline by hospital discharge. Secondary clinical outcomes were mortality within 30 days of alert, progression of AKI stage, transfer to renal/intensive care units, hospital re-admission within 30 days of discharge, dependence on renal replacement therapy 30 days after discharge, and hospital-wide cardiac arrest rate. Time taken for specialist review of AKI alerts was measured. Impact on health care costs as defined by Patient-Level Information and Costing System data was evaluated using difference-in-differences (DID) analysis. Results The median time to AKI alert review by a specialist was 14.0 min (interquartile range 1.0-60.0 min). There was no impact on the primary outcome (estimated odds ratio [OR] 1.00, 95% CI 0.58-1.71; P=.99). Although the hospital-wide cardiac arrest rate fell significantly at the intervention site (OR 0.55, 95% CI 0.38-0.76; P<.001), DID analysis with the comparator site was not significant (OR 1.13, 95% CI 0.63-1.99; P=.69). There was no impact on other secondary clinical outcomes. Mean health care costs per patient were reduced by £2123 (95% CI −£4024 to −£222; P=.03), not including costs of providing the technology. Conclusions The digitally enabled clinical intervention to detect and treat AKI in hospitalized patients reduced health care costs and possibly reduced cardiac arrest rates. Its impact on other clinical outcomes and identification of the active components of the pathway requires clarification through evaluation across multiple sites.
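
The cardiac arrest finding illustrates why the comparator site matters: the within-site change at the intervention hospital must be contrasted against the contemporaneous change elsewhere. A minimal difference-in-differences sketch on the log-odds scale, using hypothetical counts rather than the study's data:

```python
import math

def log_odds(events, n):
    """Log odds of an event given event count and denominator."""
    p = events / n
    return math.log(p / (1 - p))

# Hypothetical (events, admissions) pre/post at each site.
pre_int,  post_int  = (60, 10000), (34, 10200)   # intervention site
pre_comp, post_comp = (55, 9800),  (48, 9900)    # comparator site

# DID: change at the intervention site minus change at the comparator.
did = ((log_odds(*post_int) - log_odds(*pre_int))
       - (log_odds(*post_comp) - log_odds(*pre_comp)))
print(f"DID (log-odds) = {did:.3f}, OR = {math.exp(did):.2f}")
```

In the study, the intervention site's fall in arrests looked significant on its own, but the DID odds ratio against the comparator site (1.13, CI crossing 1) did not, which is why the abstract hedges that outcome as only "possibly" improved.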


Author(s):  
Courtney Cox ◽  
Krishna Patel ◽  
Rebecca Cantu ◽  
Chary Akmyradov ◽  
Katherine Irby

OBJECTIVE: Status asthmaticus is commonly treated in pediatric patients by using continuous albuterol, which can cause hypokalemia. The primary aim of this study was to determine whether serial potassium monitoring is necessary by examining how often hypokalemia was treated. METHODS: This retrospective analysis was performed in 185 pediatric patients admitted with status asthmaticus requiring continuous albuterol between 2017 and 2019. All patients were placed on intravenous fluids containing potassium. The primary outcome measure was the treatment of hypokalemia in relation to the number of laboratory draws for potassium levels. The secondary outcome measure was the frequency of hypokalemia and its relation to the duration and initial dose of continuous albuterol. RESULTS: A total of 156 patients with 420 laboratory draws for potassium levels (average, 2.7 per patient) were included. The median lowest potassium level was 3.40 mmol/L (interquartile range, 3.2–3.7). No correlation was found between initial albuterol dose and lowest potassium level (P = .52). Patients with hypokalemia had a mean albuterol time of 12.32 (SD, 15.76) hours, whereas patients without hypokalemia had a mean albuterol time of 11.50 (SD, 12.53) hours (P = .29). Hypokalemia was treated on 13 separate occasions. CONCLUSIONS: The number of laboratory draws for potassium levels was high in our cohort, with few patients receiving treatment for hypokalemia beyond the potassium routinely added to maintenance fluids. Neither length of time on albuterol nor albuterol dose was shown to increase the risk of hypokalemia. Serial laboratory measurements could be reduced, potentially lowering health care costs as well as the pain and anxiety surrounding needlesticks.


2019 ◽  
Vol 30 (7) ◽  
pp. 1294-1304 ◽  
Author(s):  
Amit X. Garg ◽  
Neal Badner ◽  
Sean M. Bagshaw ◽  
Meaghan S. Cuerden ◽  
Dean A. Fergusson ◽  
...  

Background Safely reducing red blood cell transfusions can prevent transfusion-related adverse effects, conserve the blood supply, and reduce health care costs. Both anemia and red blood cell transfusion are independently associated with AKI, but observational data are insufficient to determine whether a restrictive approach to transfusion can be used without increasing AKI risk. Methods In a prespecified kidney substudy of a randomized noninferiority trial, we compared a restrictive threshold for red blood cell transfusion (transfuse if hemoglobin <7.5 g/dl, intraoperatively and postoperatively) with a liberal threshold (transfuse if hemoglobin <9.5 g/dl in the operating room or intensive care unit, or if hemoglobin <8.5 g/dl on the nonintensive care ward). We studied 4531 patients undergoing cardiac surgery with cardiopulmonary bypass who had a moderate-to-high risk of perioperative death. The substudy’s primary outcome was AKI, defined as a postoperative increase in serum creatinine of ≥0.3 mg/dl within 48 hours of surgery, or ≥50% within 7 days of surgery. Results Patients in the restrictive-threshold group received significantly fewer transfusions than patients in the liberal-threshold group (1.8 versus 2.9 on average, or 38% fewer transfusions in the restrictive-threshold group compared with the liberal-threshold group; P<0.001). AKI occurred in 27.7% of patients in the restrictive-threshold group (624 of 2251) and in 27.9% of patients in the liberal-threshold group (636 of 2280). Similarly, among patients with preoperative CKD, AKI occurred in 33.6% of patients in the restrictive-threshold group (258 of 767) and in 32.5% of patients in the liberal-threshold group (252 of 775). Conclusions Among patients undergoing cardiac surgery, a restrictive transfusion approach resulted in fewer red blood cell transfusions without increasing the risk of AKI.
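
The substudy's AKI definition is a simple rule over the postoperative creatinine series. A sketch of how such a flag might be computed, with a hypothetical patient record (the function and record are illustrative, not the trial's code):

```python
from datetime import datetime, timedelta

def flags_aki(baseline, measurements, surgery_time):
    """Apply the substudy's AKI definition: serum creatinine rise of
    >= 0.3 mg/dl within 48 h of surgery, or >= 50% above baseline
    within 7 days. `measurements` is a list of (timestamp, mg/dl)."""
    for t, cr in measurements:
        dt = t - surgery_time
        if timedelta(0) <= dt <= timedelta(hours=48) and cr - baseline >= 0.3:
            return True
        if timedelta(0) <= dt <= timedelta(days=7) and cr >= 1.5 * baseline:
            return True
    return False

# Hypothetical postoperative course: small early rise, 60% rise by day 3.
op = datetime(2017, 3, 1, 8, 0)
series = [(op + timedelta(hours=24), 1.1), (op + timedelta(days=3), 1.6)]
print(flags_aki(baseline=1.0, measurements=series, surgery_time=op))  # True
```

Note that either branch alone suffices: a patient can qualify on the absolute 48-hour criterion, the relative 7-day criterion, or both.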


2009 ◽  
Vol 12 (7) ◽  
pp. A484-A485
Author(s):  
M Angalakuditi ◽  
R Pokrzywinski ◽  
B Currie ◽  
W Lenderking

1998 ◽  
Vol 16 (3) ◽  
pp. 166-172
Author(s):  
Christopher T. Glenn ◽  
Frank A. Blazich ◽  
Stuart L. Warren

Abstract Following harvest of capsules, drying, and seed extraction, seeds of Kalmia latifolia L. (mountain laurel), Leucothoe fontanesiana (Steud.) Sleum (drooping leucothoe), Rhododendron carolinianum Rehd. (Carolina rhododendron), Rhododendron catawbiense Michx. (Catawba rhododendron), and Rhododendron maximum L. (rosebay rhododendron) were stored for 0, 1, 2, 3, 4 or 5 years at −18,4 or 23C (0, 39 or 73F) and then germinated at 25C (77F) or an 8/16 hr thermoperiod of 25/15C (77/59F) with daily photoperiods of 0, 1 or 24 hr. Storage at −18 or 4C (0 or 39F) were most effective for maintaining seed viability of all species. After 5 years storage at −18 or 4C (0 or 39F), viability of L. fontanesiana, R. catawbiense, and R. maximum was relatively unchanged with total germination of 59%, 87%, and 88%, respectively. The same was noted for seeds of K. latifolia and R. carolinianum with total germination of 77% and 91%, respectively, after storage for 4 years at the same temperatures. Storage at 23C (73F) was the least effective for maintaining viability. After storage for 1 year at 23C (73F), germination decreased significantly for all species except R. carolinianum. By year 3, storage at 23C (73F) reduced seed viability of L. fontanesiana to essentially zero. The same occurred by year 4 for seeds of R. catawbiense and R. maximum stored at 23C (73F). Viability of K. latifolia also decreased under storage at 23C (73F) with germination of 14% noted by year 4. Viability of R. carolinianum did not decrease as rapidly as the other species when stored at 23C (73F) with total germination of 77% occurring by year 4. Regardless of storage duration, the photoperiod and temperature requirements for maximum germination of all species did not change.


2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 18-18
Author(s):  
Maricruz Rivera-Hernandez ◽  
Aaron Castillo ◽  
Amal Trivedi

Abstract Medicare enrollment among people with Alzheimer’s Disease and Related Dementias (ADRD) has reached an all-time high, with about 12% of beneficiaries having an ADRD diagnosis. The federal government has a special interest in providing health care alternatives for Medicare beneficiaries. However, few studies have focused on understanding disenrollment from fee-for-service, especially among those with high needs. In this study we identified predictors of disenrollment among beneficiaries with ADRD. We used the 2017-2018 Medicare Master Beneficiary Summary File to determine enrollment, sociodemographic and clinical characteristics, and health care utilization. We included all fee-for-service beneficiaries enrolled in 2017 who survived the first quarter of 2018. Our primary outcome was disenrollment from fee-for-service between 2017 and 2018. Regression models included age, sex, race/ethnicity, dual eligibility for Medicare and Medicaid, chronic and disabling conditions (categorized by quartiles), total health care costs including outpatient, inpatient, post-acute care and other costs (categorized by quartiles), and county fixed effects. There were 1,797,047 beneficiaries enrolled in fee-for-service with an ADRD diagnosis. The strongest predictors of disenrollment were race/ethnicity and dual eligibility. Disenrollment rates were 7.9% (95% CI, 7.2 – 8.5) among African Americans, 6.6% (95% CI, 6.2 – 7.0) among Hispanics, and 4.3% (95% CI, 4.2 – 4.3) among Whites. Dually eligible beneficiaries were 1.9% (95% CI, 1.4 – 2.3) more likely to disenroll from fee-for-service to Medicare Advantage (MA). MA special needs plans and additional benefits for those with ADRD and complex chronic conditions may be particularly valuable for beneficiaries with ADRD who do not have Medigap coverage when enrolled in fee-for-service.
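
The modeling setup described, a binary disenrollment outcome regressed on demographics, eligibility, cost quartiles, and county fixed effects, corresponds to a standard fixed-effects logistic regression. A toy sketch with simulated data (the covariate list is abbreviated and nothing here reflects the actual Master Beneficiary Summary File):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated beneficiaries: demographics, dual status, cost quartile, county.
df = pd.DataFrame({
    "age": rng.integers(65, 95, n),
    "female": rng.integers(0, 2, n),
    "dual": rng.integers(0, 2, n),
    "cost_q": rng.integers(1, 5, n),
    "county": rng.choice(list("ABCD"), n),
})
# Simulate a higher disenrollment probability for dual-eligible people.
p = 1 / (1 + np.exp(-(-3.0 + 0.6 * df["dual"])))
df["disenrolled"] = rng.binomial(1, p)

# Logistic model with county fixed effects via the C() categorical term.
model = smf.logit(
    "disenrolled ~ age + female + dual + C(cost_q) + C(county)", data=df
).fit(disp=0)
print(model.params.round(3))
```

With ~1.8 million beneficiaries and county fixed effects, the real analysis is far larger, but the structure, a categorical dummy per county absorbing local market differences, is the same.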

