Efficacy of BioFire FilmArray Gastrointestinal Panel (FGP) to Reduce Hospital Costs Associated With Contact Isolation: A Pragmatic Randomized Controlled Trial

Author(s):  
Giulio DiDiodato ◽  
Ashley Allan ◽  
Nellie Bradbury ◽  
Julia Brown ◽  
Kelly Cruise ◽  
...  

Abstract Background: Molecular syndromic panels can rapidly detect common pathogens responsible for acute gastroenteritis in hospitalized patients. Their impact on both patient and healthcare system outcomes is uncertain compared to conventional stool testing. This randomized trial evaluates the impact of molecular testing on in-hospital resource utilization compared to conventional stool testing. Methods: Hospitalized patients with acute diarrheal illness were randomized 1:1 to either conventional or molecular stool testing with the BioFire FilmArray gastrointestinal panel (FGP). The primary outcome was duration of contact isolation, and secondary outcomes included other in-hospital resource utilization such as diagnostic imaging and antimicrobial use. Results: A total of 156 patients were randomized. Randomization resulted in a balanced allocation of patients across all 3 age strata (<18, 18-69, ≥70 years old). The proportion of positive stools was 20.5% vs 29.5% in the control and FGP groups, respectively (p=0.196). The median duration of contact isolation was 51 hours (interquartile range [IQR] 66) and 69 hours (IQR 81) in the conventional and FGP groups, respectively (p=0.0513). There were no significant differences in other in-hospital resource utilization between groups. Conclusions: There were no differences in in-hospital resource utilization observed between the FGP and conventional stool testing groups. Trial registration: ClinicalTrials.gov NCT04189874
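As an illustrative aside, the reported positivity comparison can be reproduced with a standard chi-square test. The minimal Python sketch below uses counts back-calculated from the abstract's percentages (20.5% and 29.5% of 78 patients per arm); they are approximate reconstructions, not the trial dataset.

```python
# Illustrative reconstruction of the stool-positivity comparison; counts are
# back-calculated from 20.5% and 29.5% of 78 patients per arm (approximate).
from scipy.stats import chi2_contingency

table = [[16, 62],   # conventional arm: positive, negative stools (~20.5%)
         [23, 55]]   # FGP arm: positive, negative stools (~29.5%)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")  # p ~ 0.196, matching the abstract
```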

2021 ◽  
Vol 09 (03) ◽  
pp. E378-E387
Author(s):  
Konstantinos Triantafyllou ◽  
Paraskevas Gkolfakis ◽  
Alexandros Skamnelos ◽  
Georgia Diamantopoulou ◽  
Athanasios Dagas ◽  
...  

Abstract Background and study aims Bowel preparation for colonoscopy is frequently inadequate in hospitalized patients. We explored the impact of specific verbal instructions on the quality of inpatients' bowel preparation and the factors associated with preparation failure. Patients and methods Randomized (1:1), two-strata (mobilized vs. bedridden; 3:2) trial of consecutive inpatients from four tertiary centers, who received either specific verbal instructions or the standard-of-care (SOC) ward instructions about bowel preparation. The rate of adequate bowel preparation (Boston Bowel Preparation Score [BBPS] ≥ 6, no segment < 2) comprised the primary endpoint. Mean BBPS score and rates of good (BBPS ≥ 7, no segment < 2) and excellent (BBPS = 9) preparation were among the secondary endpoints. Results We randomized 300 inpatients (180 mobile) aged 71.7 ± 15.1 years to the intervention (49.7 %) and SOC (50.3 %) groups, respectively. Overall, more patients in the intervention group achieved adequate bowel preparation, but this difference did not reach statistical significance in either the intention-to-treat [90/149 (60.4 %) vs. 82/151 (54.3 %); P = 0.29] or the per-protocol analysis [90/129 (69.8 %) vs. 82/132 (62.1 %); P = 0.19]. Overall BBPS score did not differ statistically significantly between the two groups, but the provision of specific verbal instructions was associated with significantly higher rates of good (58.1 % vs. 43.2 %; P = 0.02) and excellent (31.8 % vs. 16.7 %; P = 0.004) bowel preparation compared to the SOC group. Administration of same-day bowel preparation and a patient American Society of Anesthesiologists score > 2 were identified as risk factors for inadequate bowel preparation. Conclusions Provision of specific verbal instructions did not increase the rate of adequate bowel preparation in a population of mobilized and bedridden hospitalized patients.
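Risk-factor analyses of this kind are typically done with logistic regression. The sketch below is hypothetical: the file name and the columns (inadequate, same_day_prep, asa_gt2, bedridden) are illustrative stand-ins, not the trial's actual variables or prespecified model.

```python
# Hypothetical sketch: screening risk factors for inadequate bowel preparation
# (BBPS < 6 or any segment < 2) with logistic regression. The file and column
# names are illustrative, not the trial's actual variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("inpatient_prep.csv")  # hypothetical dataset
# inadequate: 1/0 outcome; same_day_prep: 1/0; asa_gt2: ASA score > 2 (1/0)
model = smf.logit("inadequate ~ same_day_prep + asa_gt2 + bedridden", data=df).fit()
print(model.summary())
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```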


2015 ◽  
Vol 21 (8) ◽  
pp. 1366-1371 ◽  
Author(s):  
Sunita Mulpuru ◽  
Shawn D. Aaron ◽  
Paul E. Ronksley ◽  
Nadine Lawrence ◽  
Alan J. Forster

Trials ◽  
2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Feng Bin Lin ◽  
Shi Da Chen ◽  
Yun He Song ◽  
Wei Wang ◽  
...  

Abstract Background Currently, whether and when intraocular pressure (IOP)-lowering medication should be used in glaucoma suspects with high myopia (GSHM) remains unknown. Glaucoma suspects are defined by visual field (VF) defects that cannot be explained by myopic macular changes or other retinal and neurologic conditions. Glaucoma progression is defined by VF deterioration. Here we describe the rationale, design, and methodology of a randomized controlled trial (RCT) designed to evaluate the effects of medically lowering IOP in GSHM (GSHM study). Methods The GSHM study is an open-label, single-center RCT for GSHM. Overall, 264 newly diagnosed participants, aged 35 to 65 years, will be recruited at the Zhongshan Ophthalmic Center, Sun Yat-sen University, between 2020 and 2021. Participants will be randomly divided into two arms at a 1:1 ratio. Participants in the intervention arm will receive IOP-lowering medication, while participants in the control arm will be followed up without treatment for 36 months or until they reach the end point. Only one eye per participant will be eligible for the study. If both eyes are eligible, the eye with the worse VF will be recruited. The primary outcome is the incidence of glaucoma suspect progression by VF testing over 36 months. The secondary outcomes include the incidence of changes in optic nerve head morphology, including retinal nerve fiber layer and retinal ganglion cell-inner plexiform layer loss, progression of myopic maculopathy, visual function loss, and change in quality of life. Statistical analyses will include comparison of baseline characteristics between the intervention and control groups using a two-sample t-test and the Wilcoxon rank sum test; generalized linear models with Poisson regression for the primary outcome; Kaplan-Meier curves and the log-rank test for the incidence of the secondary outcomes; and longitudinal analyses to assess trends in outcomes over time. Discussion To the best of our knowledge, the GSHM study is the first RCT to investigate the impact of medically lowering IOP in GSHM. The results will have implications for the clinical management of GSHM. Trial registration ClinicalTrials.gov NCT04296916. Registered on 4 March 2020
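For the planned Kaplan-Meier and log-rank analysis of secondary-outcome incidence, a minimal sketch using the lifelines library is shown below; the data file, column names, and arm labels are hypothetical, not the trial's.

```python
# Hypothetical sketch of the planned Kaplan-Meier/log-rank comparison of time
# to a secondary-outcome event between arms; data and labels are illustrative.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("gshm_followup.csv")  # hypothetical: months, event (1/0), arm
treated = df[df["arm"] == "iop_lowering"]
control = df[df["arm"] == "observation"]

kmf = KaplanMeierFitter()
kmf.fit(treated["months"], treated["event"], label="IOP-lowering")  # survival curve

result = logrank_test(treated["months"], control["months"],
                      event_observed_A=treated["event"],
                      event_observed_B=control["event"])
print(f"log-rank p = {result.p_value:.3f}")
```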


2021 ◽  
Author(s):  
Philippe Bégin ◽  
Jeannie Callum ◽  
Erin Jamulae Jamula ◽  
Richard Cook ◽  
Nancy M Heddle ◽  
...  

The efficacy of convalescent plasma for COVID-19 is unclear. While most randomized controlled trials have shown negative results, uncontrolled studies have suggested that the antibody content may influence patient outcomes. We conducted an open-label, randomized controlled trial of convalescent plasma for adults with COVID-19 receiving oxygen within 12 days of respiratory symptom onset. Patients were allocated 2:1 to 500 mL of convalescent plasma or standard of care. The composite primary outcome was intubation or death by 30 days. The effect of convalescent plasma antibodies on the primary outcome was assessed by logistic regression. The trial was terminated at 78% of planned enrollment after meeting stopping criteria for futility. 940 patients were randomized and 921 patients were included in the intent-to-treat analysis. Intubation or death occurred in 199/614 (32.4%) in the convalescent plasma arm and 86/307 (28.0%) in the standard of care arm; relative risk (RR) 1.16 (95% confidence interval (CI) 0.94-1.43; p=0.18). Patients in the convalescent plasma arm had more serious adverse events (33.4% vs. 26.4%; RR=1.27, 95% CI 1.02-1.57, p=0.034). The antibody content significantly modulated the therapeutic effect of convalescent plasma. In multivariate analysis, each standard log increase in neutralization or antibody-dependent cellular cytotoxicity independently reduced the potential harmful effect of plasma (OR=0.74; 0.57-0.95 and OR=0.66; 0.50-0.87, respectively), while IgG against the full transmembrane Spike protein increased it (OR=1.53, 95% CI 1.14-2.05). Convalescent plasma did not reduce the risk of intubation or death at 30 days among hospitalized patients with COVID-19. Transfusion of convalescent plasma with unfavourable antibody profiles may be associated with worse clinical outcomes compared to standard care.
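The reported relative risk and confidence interval follow directly from the standard log-RR formula; the short calculation below reproduces them from the event counts given in the abstract.

```python
# Reproducing the reported relative risk for intubation or death (RR 1.16,
# 95% CI 0.94-1.43) from the abstract's raw counts via the log-RR formula.
import math

a, n1 = 199, 614   # events / patients, convalescent plasma arm
b, n2 = 86, 307    # events / patients, standard-of-care arm

rr = (a / n1) / (b / n2)
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 1.16, 95% CI 0.94-1.43
```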


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S353-S353
Author(s):  
Cynthia T Nguyen ◽  
Oumaima Sahbani ◽  
Jennifer Pisano ◽  
Ken Pursell ◽  
Natasha N Pettit

Abstract Background Reported β-lactam allergies are common and are associated with inappropriate antibiotic therapy, poor clinical outcomes, and increased hospital costs. Documentation of β-lactam reactions is often incomplete, and many patients with a reported allergy can tolerate a β-lactam antibiotic. This study aims to evaluate the impact of a standardized interviewing tool used by pharmacists on the quality of β-lactam allergy documentation. Methods This is a single-center, prospective, quasi-experimental study of adult inpatients. Patients were included if they had a documented β-lactam allergy, were interviewed by a pharmacist utilizing a standardized tool, and had the β-lactam allergy updated in the electronic medical record. The primary outcome was the percentage of patients with a complete allergy history documented. A complete allergy history was defined as including a description of the type of reaction and the timing of the reaction. Secondary endpoints included the documentation of individual allergy history components, including whether interventions were required to manage the reaction, tolerance of other β-lactams, and receipt of penicillin skin testing in the past. A subgroup analysis was also performed among patients who received antibiotics during the admission, evaluating antibiotic use, length of stay, mortality, and readmission. Results The study included 107 patients. The average time to complete an interview was 14.8 minutes. After the interview, 11 (10%) patients had the β-lactam allergy label removed. Consequently, 107 allergy labels were evaluated in the pre-interview arm and 96 allergy labels in the post-interview arm. More patients had a complete allergy history documented after pharmacist intervention (39% vs. 0%, P < 0.001). Documentation of all components of the allergy history improved after the interview (Table 1). Additionally, the number of patients with an unknown reaction declined significantly (21% vs. 6%, P = 0.004). Conclusion The use of a standardized β-lactam allergy interview tool improved the quality of allergy documentation, led to de-labeling of β-lactam allergies, and reduced the number of unknown reactions. Disclosures All authors: No reported disclosures.
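The decline in unknown reactions can be illustrated with a two-proportion z-test. The counts below are back-calculated from the reported percentages and are approximate, so the resulting p-value is of similar magnitude to, not identical with, the reported P = 0.004.

```python
# Illustrative two-proportion z-test for the decline in "unknown reaction"
# labels; counts are back-calculated from 21% of 107 and 6% of 96 labels.
from statsmodels.stats.proportion import proportions_ztest

count = [22, 6]     # approximate "unknown reaction" counts, pre vs. post
nobs = [107, 96]    # allergy labels evaluated, pre vs. post
stat, p = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {p:.4f}")  # on the order of the reported P = 0.004
```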


Author(s):  
Kirati Kengkla ◽  
Yuttana Wongsalap ◽  
Natthaya Chaomuang ◽  
Pichaya Suthipinijtham ◽  
Peninnah Oberdorfer ◽  
...  

Abstract Objective: To assess the impact of carbapenem resistance and delayed appropriate antibiotic therapy (DAAT) on clinical and economic outcomes among patients with Enterobacterales infection. Methods: This retrospective cohort study was conducted in a tertiary-care medical center in Thailand. Hospitalized patients with Enterobacterales infection were included. Infections were classified as carbapenem-resistant Enterobacterales (CRE) or carbapenem-susceptible Enterobacterales (CSE). Multivariate Cox proportional hazards modeling was used to examine the association between CRE with DAAT and 30-day mortality. Generalized linear models were used to examine length of stay (LOS) and in-hospital costs. Results: In total, 4,509 patients with Enterobacterales infection (mean age, 65.2 ± 18.7 years; 43.3% male) were included; 627 patients (13.9%) had CRE infection. Among these CRE patients, 88.2% received DAAT. CRE was associated with additional medication costs of $177 (95% confidence interval [CI], 114–239; P < .001) and additional in-hospital costs of $725 (95% CI, 448–1,002; P < .001). Patients with CRE infections had significantly longer LOS and higher mortality rates than patients with CSE infections: attributable LOS, 7.3 days (95% CI, 5.4–9.1; P < .001) and adjusted hazard ratio (aHR), 1.55 (95% CI, 1.26–1.89; P < .001). CRE with DAAT was associated with significantly longer LOS, higher mortality rates, and higher in-hospital costs. Conclusion: CRE and DAAT are associated with worse clinical outcomes and higher in-hospital costs among hospitalized patients in a tertiary-care hospital in Thailand.
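A multivariate Cox proportional hazards model of this kind can be sketched with the lifelines library. The data set and column names below are hypothetical placeholders, not the study's variables or covariate set.

```python
# Hypothetical sketch of a multivariate Cox model for 30-day mortality with
# CRE plus delayed appropriate therapy as the exposure; columns illustrative.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("enterobacterales_cohort.csv")  # hypothetical dataset
# days: follow-up (censored at 30); died: 1/0; cre_daat: CRE with DAAT (1/0)
cph = CoxPHFitter()
cph.fit(df[["days", "died", "cre_daat", "age", "male"]],
        duration_col="days", event_col="died")
cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios
```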


Author(s):  
Sue Hyun Kim ◽  
Myoung-jin Jang ◽  
Ho Young Hwang

Abstract Background This meta-analysis was conducted to evaluate the impact of perioperative use of beta-blockers (BB) on postoperative atrial fibrillation (POAF) after cardiac surgery other than isolated coronary artery bypass grafting (CABG). Methods Five online databases were searched. Studies were included if they (1) enrolled patients who underwent cardiac surgery other than isolated CABG and (2) demonstrated the impact of perioperative use of BB on POAF based on a randomized controlled trial or adjusted analysis. The primary outcome was the occurrence rate of POAF after cardiac surgery. A meta-regression and a subgroup analysis were performed according to the proportion of patients with cardiac surgery other than isolated CABG and the timing of BB use, respectively. Results Thirteen articles (5 randomized and 8 nonrandomized studies; n = 25,496) were selected. The proportion of enrolled patients undergoing cardiac surgery other than isolated CABG ranged from 7 to 100%. BBs were used in the preoperative period, the postoperative period, and both periods in 5, 5, and 3 studies, respectively. The pooled analyses showed that the risk of POAF was significantly lower in patients with perioperative BB than in those without (odds ratio, 95% confidence interval = 0.56, 0.35–0.91 and 0.70, 0.55–0.91 in randomized and nonrandomized studies, respectively). The risk of POAF was lower in the BB group irrespective of the proportion of nonisolated CABG. Benefit regarding in-hospital mortality was inconclusive. Perioperative stroke and length of stay were not significantly different between the BB and non-BB groups. Conclusions Perioperative use of BB is effective in preventing POAF even in patients undergoing cardiac surgery other than isolated CABG, although this did not translate into improved clinical outcomes.
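Pooled odds ratios of this kind come from inverse-variance meta-analysis. The sketch below shows a minimal DerSimonian-Laird random-effects pooling of study-level log odds ratios; the ORs and variances are hypothetical inputs, not the included studies' data.

```python
# Minimal sketch of inverse-variance random-effects pooling (DerSimonian-Laird)
# of study-level log odds ratios. Inputs are hypothetical, not the meta-analysis
# data.
import numpy as np

log_or = np.log(np.array([0.45, 0.70, 0.62, 0.85, 0.55]))  # hypothetical ORs
var = np.array([0.10, 0.05, 0.08, 0.04, 0.12])             # hypothetical variances

w = 1 / var                                  # fixed-effect weights
fe_mean = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - fe_mean) ** 2)      # Cochran's Q
dof = len(log_or) - 1
tau2 = max(0.0, (q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL tau^2

w_re = 1 / (var + tau2)                      # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = 1 / np.sqrt(np.sum(w_re))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")
```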


2009 ◽  
Vol 18 (1) ◽  
pp. 54-58 ◽  
Author(s):  
Paul McCrone ◽  
Sonia Johnson ◽  
Fiona Nolan ◽  
Stephen Pilling ◽  
Andrew Sandor ◽  
...  

Summary Aims – The use of specialised services to avoid admission to hospital for people experiencing mental health crises is seen as an integral part of psychiatric services in some countries. The aim of this paper is to assess the impact on costs and cost-effectiveness of a crisis resolution team (CRT). Methods – Patients who were experiencing mental health crises sufficient for admission to be considered were randomised to either care provided by a CRT or standard services. The primary outcome measure was inpatient days over a six-month follow-up period. Service use was measured, costs calculated and cost-effectiveness assessed. Results – Patients receiving care from the CRT had non-inpatient costs £768 higher than patients receiving standard care (90% CI, £153 to £1375). With the inclusion of inpatient costs, total costs were £2438 lower for the CRT group (90% CI, £937 to £3922). If one less day spent as an inpatient was valued at £100, there would be a 99.5% likelihood of the CRT being cost-effective. Conclusion – This CRT was shown to be cost-effective for modest values placed on reductions in inpatient stays.
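The 99.5% figure is a net-benefit probability: each inpatient day avoided is valued at a willingness-to-pay threshold, and the probability that the incremental net benefit is positive is estimated, typically by bootstrapping patient-level data. The sketch below illustrates that logic on simulated data; all numbers are hypothetical.

```python
# Hypothetical sketch of the net-benefit logic: value each inpatient day
# avoided at a willingness-to-pay (here £100/day) and bootstrap the probability
# that the crisis resolution team (CRT) is cost-effective. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
# simulated patient-level (cost, inpatient_days) pairs per arm
crt = rng.normal([2500, 8], [1200, 5], size=(100, 2))
soc = rng.normal([3500, 15], [1500, 8], size=(100, 2))

wtp = 100.0  # £ per inpatient day avoided
hits, B = 0, 5000
for _ in range(B):
    c = crt[rng.integers(0, len(crt), len(crt))]   # bootstrap resample, CRT
    s = soc[rng.integers(0, len(soc), len(soc))]   # bootstrap resample, standard
    # net benefit = value of days saved minus incremental cost of the CRT
    nb = wtp * (s[:, 1].mean() - c[:, 1].mean()) - (c[:, 0].mean() - s[:, 0].mean())
    hits += nb > 0
print(f"P(cost-effective at £{wtp:.0f}/day) = {hits / B:.3f}")
```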


2015 ◽  
Vol 19 (13) ◽  
pp. 1-212 ◽  
Author(s):  
John L Campbell ◽  
Emily Fletcher ◽  
Nicky Britten ◽  
Colin Green ◽  
Tim Holt ◽  
...  

Background Telephone triage is proposed as a method of managing increasing demand for primary care. Previous studies have involved small samples in limited settings, and focused on nurse roles. Evidence is limited regarding the impact on primary care workload, costs, and patient safety and experience when triage is used to manage patients requesting same-day consultations in general practice. Objectives In comparison with usual care (UC), to assess the impact of GP-led telephone triage (GPT) and nurse-led computer-supported telephone triage (NT) on primary care workload and cost, patient experience of care, and patient safety and health status for patients requesting same-day consultations in general practice. Design Pragmatic cluster randomised controlled trial, incorporating economic evaluation and qualitative process evaluation. Setting General practices (n = 42) in four regions of England, UK (Devon, Bristol/Somerset, Warwickshire/Coventry, Norfolk/Suffolk). Participants Patients requesting same-day consultations. Interventions Practices were randomised to GPT, NT or UC. Data collection was not blinded; however, analysis was conducted by a statistician blinded to practice allocation. Main outcome measures Primary – primary care contacts [general practice, out-of-hours primary care, accident and emergency (A&E) and walk-in centre attendances] in the 28 days following the index consultation request. Secondary – resource use and costs, patient safety (deaths and emergency hospital admissions within 7 days of index request, and A&E attendance within 28 days), health status and experience of care. Results Of 20,990 eligible randomised patients (UC n = 7283; GPT n = 6695; NT n = 7012), primary outcome data were analysed for 16,211 patients (UC n = 5572; GPT n = 5171; NT n = 5468). Compared with UC, GPT and NT increased primary outcome contacts (over 28-day follow-up) by 33% [rate ratio (RR) 1.33, 95% confidence interval (CI) 1.30 to 1.36] and 48% (RR 1.48, 95% CI 1.44 to 1.52), respectively. Compared with GPT, NT was associated with a marginal increase in primary outcome contacts of 4% (RR 1.04, 95% CI 1.01 to 1.08). Triage was associated with a redistribution of primary care contacts. Although GPT, compared with UC, increased the rate of overall GP contacts (face to face and telephone) over the 28 days by 38% (RR 1.38, 95% CI 1.28 to 1.50), GP face-to-face contacts were reduced by 39% (RR 0.61, 95% CI 0.54 to 0.69). NT reduced the rate of overall GP contacts by 16% (RR 0.84, 95% CI 0.78 to 0.91) and GP face-to-face contacts by 20% (RR 0.80, 95% CI 0.71 to 0.90), whereas nurse contacts increased. The increased rate of primary care contacts in the triage arms is largely attributable to increased telephone contacts. Estimated overall patient–clinician contact time on the index day increased in triage (GPT = 10.3 minutes; NT = 14.8 minutes; UC = 9.6 minutes), although patterns of clinician use varied between arms. Taking account of both the pattern and duration of primary outcome contacts, overall costs over the 28-day follow-up were similar in all three arms (approximately £75 per patient). Triage appeared safe, and no differences in patient health status were observed. NT was somewhat less acceptable to patients than GPT or UC. The process evaluation identified the complexity associated with introducing triage but found no consistency across practices about what works and what does not work when implementing it. Conclusions Introducing GPT or NT was associated with a redistribution of primary care workload for patients requesting same-day consultations, at a similar cost to UC. Although triage seemed to be safe, investigation of the circumstances of a larger number of deaths or admissions after triage might be warranted, and monitoring of these events is necessary as triage is implemented. Trial registration Current Controlled Trials ISRCTN20687662. Funding This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 19, No. 13. See the NIHR Journals Library website for further project information.
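Contact rate ratios of this kind are commonly estimated with Poisson regression, with standard errors adjusted for clustering by practice. The sketch below is hypothetical: the file and column names are illustrative, and this is a generic model, not the trial's prespecified analysis.

```python
# Hypothetical sketch: rate ratios for 28-day primary care contacts by trial
# arm via Poisson regression, with practice-clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("triage_contacts.csv")  # hypothetical: contacts_28d, arm, practice
model = smf.glm("contacts_28d ~ C(arm, Treatment('UC'))", data=df,
                family=sm.families.Poisson()).fit(
                    cov_type="cluster", cov_kwds={"groups": df["practice"]})
print(np.exp(model.params))      # rate ratios vs. usual care
print(np.exp(model.conf_int()))  # 95% confidence intervals
```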

