International practice variation in perioperative laboratory testing in glioblastoma patients—a retrospective cohort study

Author(s):  
Joeky T. Senders ◽  
Sybren L. N. Maas ◽  
Kaspar Draaisma ◽  
John J. McNulty ◽  
Joanna L. Ashby ◽  
...  

Abstract Purpose Although standard-of-care has been defined for the treatment of glioblastoma patients, substantial practice variation exists in day-to-day clinical management. This study aims to compare the use of laboratory tests in the perioperative care of glioblastoma patients between two tertiary academic centers—Brigham and Women’s Hospital (BWH), Boston, USA, and University Medical Center Utrecht (UMCU), Utrecht, the Netherlands. Methods All glioblastoma patients treated according to standard-of-care between 2005 and 2013 were included. We compared the number of blood drawings and laboratory tests performed during the 70-day perioperative period using a Poisson regression model, as well as the estimated laboratory costs per patient. Additionally, we compared the likelihood of an abnormal test result using a generalized linear mixed effects model. Results After correction for age, sex, IDH1 status, postoperative KPS score, length of stay, and survival status, the numbers of blood drawings and laboratory tests during the perioperative period were 3.7-fold (p < 0.001) and 4.7-fold (p < 0.001) higher, respectively, in BWH compared with UMCU patients. The estimated median laboratory costs per patient were 82 euros in UMCU and 256 euros in BWH. Furthermore, the likelihood of an abnormal test result was lower in BWH (odds ratio [OR] 0.75, p < 0.001), except when the prior test result was also abnormal (OR 2.09, p < 0.001). Conclusions Our results suggest a substantially lower clinical threshold for ordering laboratory tests in BWH compared with UMCU. Further investigation of the clinical consequences of laboratory testing could identify over- and underuse, decrease healthcare costs, and reduce the unnecessary discomfort to which patients are exposed.
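The 3.7- and 4.7-fold figures are rate ratios from the Poisson model, i.e. exp(β) for the center indicator. As a minimal sketch of the unadjusted case, where exp(β) reduces to a simple ratio of mean counts (the per-patient counts below are hypothetical, not study data):

```python
def poisson_rate_ratio(counts_a, counts_b):
    """In a Poisson regression with log link and a single binary group
    indicator, exp(beta) equals the ratio of the groups' mean counts."""
    mean_a = sum(counts_a) / len(counts_a)
    mean_b = sum(counts_b) / len(counts_b)
    return mean_a / mean_b

# Hypothetical perioperative laboratory test counts per patient
bwh_counts = [40, 52, 47, 61]
umcu_counts = [10, 12, 11, 17]
rr = poisson_rate_ratio(bwh_counts, umcu_counts)  # 4.0-fold difference
```

The study's reported ratios additionally adjust for age, sex, IDH1 status, KPS score, length of stay, and survival status, which requires fitting the full regression model rather than this two-group shortcut.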

2019 ◽  
Vol 152 (Supplement_1) ◽  
pp. S149-S149
Author(s):  
Jared Coberly ◽  
Emily Coberly ◽  
Katie Dettenwanger ◽  
Brandi Ross ◽  
Robert Pierce

Abstract Introduction Unnecessary and inappropriate laboratory testing contributes to increased health care costs, increased length of stay, and increased odds of blood product transfusion. The Choosing Wisely campaign recommends judicious use of laboratory blood testing to combat iatrogenic anemia. Reducing the number of duplicate test orders may help address these issues. We evaluated duplicate order alert thresholds in our electronic health record for 10 common laboratory tests at an academic medical center. Methods In January 2019, alert intervals for 10 common inpatient laboratory tests (thyroid stimulating hormone, complete blood count, hemoglobin A1c, troponin, lactic acid, hemoglobin and hematocrit, urinalysis, vitamin D, urine beta HCG, and triglycerides) were adjusted to evidence-based, disease-specific thresholds. If a test was ordered within a timeframe shorter than its threshold, an alert interrupted the provider’s workflow; the provider could override the alert based on clinical judgment. This was a change from the previous settings, which generated an alert for any test reordered within 8 hours. Postintervention duplicate order alerts were compared to baseline rates and adjusted for number of inpatient discharges. Results In total, 914 orders were cancelled in 1 month as a result of tailored duplicate order alerts, versus a baseline mean of 710 (95% CI, 633-786) and a predicted 552 (95% CI, 475-628) when adjusted for number of inpatient discharges; the majority of cancelled orders were for CBC (530 accepted alerts). Overall, this reduction in unnecessary duplicate tests is equivalent to 3,092 mL of blood not collected from patients per month. Conclusion Tailoring duplicate order alert intervals to evidence-based criteria helps reduce unnecessary testing, reduces costs, and may play an important role in reducing hospital-acquired anemia.
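The threshold logic described above amounts to a per-test interval check. A minimal sketch, with illustrative interval values only (the study's actual thresholds were test- and disease-specific and evidence-based):

```python
from datetime import datetime, timedelta

# Illustrative minimum reorder intervals in hours (assumed values)
ALERT_INTERVALS_H = {
    "CBC": 24,
    "HbA1c": 90 * 24,   # roughly one red-cell lifespan
    "TSH": 30 * 24,
}

def should_alert(test_name, last_ordered, new_order):
    """Return True when a repeat order falls inside the test-specific
    duplicate-order window (falling back to a blanket 8 h rule)."""
    window = timedelta(hours=ALERT_INTERVALS_H.get(test_name, 8))
    return (new_order - last_ordered) < window

t0 = datetime(2019, 1, 10, 8, 0)
fires = should_alert("CBC", t0, t0 + timedelta(hours=6))        # duplicate
passes = not should_alert("CBC", t0, t0 + timedelta(hours=30))  # allowed
```

As in the study, such an alert would interrupt ordering but remain overridable, leaving the final decision to clinical judgment.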


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 2371-2371 ◽  
Author(s):  
Joshua Kra ◽  
Helen Horng

Introduction: Heparin-induced thrombocytopenia (HIT) occurs in up to 5% of adults exposed to heparin, due to formation of heparin-dependent antibodies to the heparin/platelet factor 4 (PF4) complex. Patients develop thrombocytopenia and are at risk for severe thrombotic complications. Mortality rates can be as high as 20% but are often much lower with prompt recognition, cessation of heparin, and treatment with alternative anticoagulants. However, these alternative medications are often both expensive and labor-intensive infusions, highlighting the need for quick decision making in such challenging cases. The classic workup of HIT involves assessing clinical suspicion (often using the "4T score" on a scale of 0-8), followed by laboratory testing to detect the heparin-dependent antibody and then a functional assay to confirm pathogenicity of the antibody. Our institution uses an in-house rapid Particle ImmunoFiltration Assay (PIFA), a same-day test reported as "positive" or "negative." Other available testing includes send-out Enzyme-Linked Immunoassay (ELISA) for IgG, IgA, and IgM antibodies to PF4, as well as the C-14 Serotonin Release Assay (SRA) functional assay. The goal of this retrospective review is to analyze the utility of an algorithmic approach to laboratory testing in aiding the rapidity of diagnosis of HIT without missing possible critical cases. Methods: This was a single-institution study at a large urban academic medical center. We reviewed inpatient charts from 2015-2018 of patients who had any laboratory testing for HIT. Per institutional guidelines, first-line testing uses the in-house same-day PIFA. If positive, a sample is automatically reflexed and sent out for PF4 ELISA testing and SRA. Furthermore, clinicians are able to directly order ELISA and SRA testing at their medical discretion.
Our analysis looked at those patients with at least two different "HIT-related" laboratory tests to best analyze the concordance and discordance rates of the above testing to assess for sensitivity, specificity, and overall accuracy of using a stepwise testing approach. We used a cutoff value of >0.4 optical density (OD) for ELISA testing, and >20% release at low-dose heparin concentration for SRA testing. Results: There were 118 patients who had at least two different HIT-related laboratory tests sent. 91 patients had both PIFA and ELISA testing, with 37/79 (47%) positive concordance rate and 6/12 (50%) negative concordance rate, for a sensitivity of 86% and specificity of 13%. 3 patients with positive PIFA also had positive SRA, and there were 2 patients with negative PIFA with positive SRA testing (see attached Table). When comparing ELISA testing to SRA, 4/41 (10%) had concordant positive testing, while no patient with a negative ELISA test had a positive SRA (28 concordant negative cases). Overall, of 94 SRA tests run, 5 were positive, of which 2/5 had negative PIFA and 0/4 had negative ELISA testing. Conclusions: While PIFA testing had a high sensitivity compared to ELISA, the overall accuracy compared to ELISA was low, while ELISA testing was 100% sensitive in this analysis. Furthermore, there was still a risk of missing cases of HIT using PIFA testing alone. In both cases of positive SRA with a negative PIFA, patients had a high 4T score of >6, consistent with a high clinical suspicion for HIT. We conclude that PIFA testing is not equivalent to ELISA testing, and that use of a laboratory "algorithm only" approach would be inappropriate in the diagnosis of HIT. Our results highlight the importance of using both clinical scoring systems and appropriate lab testing together in the workup and diagnosis of HIT. Disclosures No relevant conflicts of interest to declare.
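The reported sensitivity and specificity of PIFA against ELISA follow directly from the concordance counts in the abstract (37 of 79 PIFA-positive patients were ELISA-positive; 6 of 12 PIFA-negative patients were ELISA-negative), treating ELISA as the reference standard:

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

# PIFA vs ELISA, reconstructed from the reported concordance rates
tp = 37          # PIFA+ / ELISA+
fp = 79 - 37     # PIFA+ / ELISA-
tn = 6           # PIFA- / ELISA-
fn = 12 - 6      # PIFA- / ELISA+
sensitivity, specificity = sens_spec(tp, fp, fn, tn)
# sensitivity ≈ 0.86, specificity ≈ 0.13, matching the abstract
```

The asymmetry is the abstract's central point: PIFA rarely misses an ELISA-positive antibody, but a positive PIFA carries little confirmatory weight on its own.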


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Patrick Spraider ◽  
Gabriel Putzer ◽  
Robert Breitkopf ◽  
Julia Abram ◽  
Simon Mathis ◽  
...  

Abstract Background Flow-controlled ventilation (FCV) is a novel ventilation method increasingly being used clinically, particularly during the current COVID-19 pandemic. However, the continuous flow pattern in FCV during inspiration and expiration has a significant impact on respiratory parameters and ventilatory settings compared to conventional ventilation modes. In addition, the constant flow combined with direct intratracheal pressure measurement allows determination of dynamic compliance, and ventilation settings can be adjusted accordingly, reflecting a personalized ventilation approach. Case presentation A 50-year-old woman with confirmed SARS-CoV-2 infection suffering from acute respiratory distress syndrome (ARDS) was admitted to a tertiary medical center. Initial ventilation occurred with best standard of care pressure-controlled ventilation (PCV) and was then switched to FCV by adopting the PCV ventilator settings. This led to a 30% increase in oxygenation. Subsequently, to reduce the invasiveness of mechanical ventilation, FCV was individualized by dynamic compliance-guided adjustment of both positive end-expiratory pressure and peak pressure; this intervention reduced driving pressure from 18 to 12 cm H2O. However, after several hours, compliance deteriorated further, resulting in a tidal volume of only 4.7 ml/kg. Conclusions An individualized FCV approach increased oxygenation parameters in a patient suffering from severe COVID-19 related ARDS. Direct intratracheal pressure measurement allows determination of dynamic compliance and thus optimization of ventilator settings, thereby reducing applied and dissipated energy. However, although desirable, this personalized ventilation strategy may reach its limits when lung function is so severely impaired that the patient’s oxygenation has to be ensured at the expense of lung-protective ventilation concepts.
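Dynamic compliance as used here follows the standard relation C_dyn = Vt / (Ppeak - PEEP), where the denominator is the driving pressure. A sketch with assumed numbers (the case report gives only the driving pressures, 18 and 12 cm H2O; the tidal volume and PEEP below are hypothetical):

```python
def dynamic_compliance(tidal_volume_ml, peak_pressure, peep):
    """C_dyn = Vt / (Ppeak - PEEP), in ml per cm H2O;
    the denominator is the driving pressure."""
    return tidal_volume_ml / (peak_pressure - peep)

# Assumed 420 ml tidal volume (6 ml/kg for a 70 kg patient), PEEP 12
c_before = dynamic_compliance(420, 30, 12)  # driving pressure 18
c_after = dynamic_compliance(420, 24, 12)   # driving pressure 12
```

At constant tidal volume, lowering the driving pressure from 18 to 12 cm H2O implies a higher measured compliance; conversely, when compliance later deteriorates at a fixed pressure limit, the delivered tidal volume falls, as with the 4.7 ml/kg observed in this case.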


2004 ◽  
Vol 128 (12) ◽  
pp. 1424-1427 ◽  
Author(s):  
Martha E. Laposata ◽  
Michael Laposata ◽  
Elizabeth M. Van Cott ◽  
Dion S. Buchner ◽  
Mohammed S. Kashalo ◽  
...  

Abstract Context.—Complex coagulation test panels ordered by clinicians are typically reported to clinicians without a patient-specific interpretive paragraph. Objectives.—To survey clinicians regarding pathologist-generated interpretations of complex laboratory testing panels and to assess the ability of the interpretations to educate test orderers. Design.—Surveys were conducted of physicians ordering complex coagulation laboratory testing that included narrative interpretation. Evaluation of order requisitions was performed to assess the interpretation's influence on ordering practices. Setting.—Physicians ordering coagulation testing at a large academic medical center hospital in Boston, Mass, and physicians from outside hospitals using the academic medical center as a reference laboratory for coagulation testing. Outcome Measures.—Physician surveys and evaluation of laboratory requisition slips. Results.—In nearly 80% of responses, the ordering clinicians perceived that the interpretive comments saved them time and improved the diagnostic process. Moreover, the interpretations were perceived by ordering clinicians to help prevent a misdiagnosis or otherwise impact the differential diagnosis in approximately 70% of responses. In addition, interpretations appeared to be able to train the ordering clinicians as to the standard ordering practices. Conclusions.—The results demonstrate physician satisfaction with an innovative information delivery approach that provides laboratory diagnostic interpretation and test-ordering education to clinicians in the context of their daily workflow.


2016 ◽  
Vol 56 (2) ◽  
pp. 88 ◽  
Author(s):  
Filip Dvořáček

This paper describes laboratory tests on a Leica AT401 laser tracker. As the newer Leica AT402 model uses the same firmware package, most of the results should also be valid for this device. First, we present the instrument’s firmware errors and the software used for testing. The ASME B89.4.19-2006 standard for testing laser trackers is briefly presented. The warm-up effect of the instrument is inspected with respect to both angle measurement and distance measurement. The absolute distance meter (ADM) is compared with a laboratory interferometer on a 30-meter-long rail and also on a bench with automated movement of the reflector carriage. A time series of measurements for determining the additive constant is evaluated. A simple test of the stability of the distance measurement in field conditions is introduced. Most of the tests were carried out at the Research Institute of Geodesy, Topography and Cartography (RIGTC) and at the Faculty of Civil Engineering (FCE) of the Czech Technical University in Prague (CTU).


2014 ◽  
Vol 120 (1) ◽  
pp. 173-177 ◽  
Author(s):  
Seunggu J. Han ◽  
Rajiv Saigal ◽  
John D. Rolston ◽  
Jason S. Cheng ◽  
Catherine Y. Lau ◽  
...  

Object Given economic limitations and burgeoning health care costs, there is a need to minimize unnecessary diagnostic laboratory tests. Methods The authors studied whether a financial incentive program for trainees could lead to fewer unnecessary laboratory tests in neurosurgical patients in a large, 600-bed academic hospital setting. The authors identified 5 laboratory tests that ranked in the top 13 of the most frequently ordered during the 2010–2011 fiscal year, yet were least likely to be abnormal or influence patient management. Results In a single year of study, there was a 47% reduction in testing of serum total calcium, ionized calcium, chloride, magnesium, and phosphorus. This reduction led to a savings of $1.7 million in billable charges to health care payers and $75,000 of direct costs to the medical center. In addition, there were no significant negative changes in the quality of care delivered, as recorded in a number of metrics, showing that this cost savings did not negatively impact patient care. Conclusions Engaging physician trainees in quality improvement can be successfully achieved by financial incentives. Through the resident-led quality improvement incentive program, neurosurgical trainees successfully reduced unnecessary laboratory tests, resulting in significant cost savings to both the medical center and the health care system. Similar programs that engage trainees could improve the value of care being provided at other academic medical centers.


Author(s):  
Andrew Tsai ◽  
Oumou Diawara ◽  
Ronald G Nahass ◽  
Luigi Brunetti

Background The novel coronavirus disease 2019 (COVID-19) worldwide pandemic has placed a significant burden on hospitals and healthcare providers. The immune response to this disease is thought to lead to a cytokine storm, which contributes to the severity of illness. There is an urgent need to confirm whether the use of tocilizumab provides a benefit in individuals with COVID-19. Methods A single-center propensity score-matched cohort study was performed, including all consecutive COVID-19 patients admitted to the medical center who were either discharged or expired between March 1, 2020, and May 5, 2020. Patients were stratified according to the receipt of tocilizumab for cytokine storm and matched to controls using propensity scores. The primary outcome was in-hospital mortality. Results A total of 132 patients were included in the matched dataset (tocilizumab=66; standard of care=66). Approximately 73% of the patients were male. Hypertension (55%), diabetes mellitus (31%), and chronic pulmonary disease (15%) were the most common comorbidities present. There were 18 deaths (27.3%) in the tocilizumab group and 18 deaths (27.3%) in the standard of care group (odds ratio, 1.0; 95% confidence interval, 0.465-2.151; p=1.00). Advanced age, history of myocardial infarction, dementia, chronic pulmonary disease, heart failure, and malignancy were significantly more common in patients who died. Interpretation The current analysis does not support the use of tocilizumab for the management of cytokine storm in patients with COVID-19. Use of this therapeutic agent should be limited to the context of a clinical trial until more evidence is available.
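The reported effect estimate can be reproduced from the 2x2 table implied by the abstract (18 deaths and 48 survivors in each arm of 66). The sketch below assumes a Woolf (log-normal) confidence interval, which matches the reported 0.465-2.151:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf 95% CI for a 2x2 table:
    a/b = events/non-events (treated), c/d = events/non-events (control)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Tocilizumab vs standard of care: 18/66 deaths in each group
or_, lo, hi = odds_ratio_ci(18, 48, 18, 48)  # 1.0, ~0.465, ~2.151
```

With identical event counts in both arms the point estimate is exactly 1.0, and the wide interval reflects the modest matched sample size.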


BJGP Open ◽  
2020 ◽  
pp. bjgpopen20X101146
Author(s):  
Claire Duddy ◽  
Geoff Wong

Background: Existing research demonstrates significant variation in test-ordering practice and growth in the use of laboratory tests in primary care. Reviews of interventions designed to change test-ordering practice report heterogeneity in design and effectiveness. Improving understanding of clinicians’ decision making in relation to laboratory testing is an important means of understanding practice patterns and developing theory-informed interventions. Aim: To develop explanations for the underlying causes of patterns of variation and increasing use of laboratory tests in primary care, and to make recommendations for future research and intervention design. Design and setting: Realist review of secondary data from primary care. Method: Diverse evidence, including data from qualitative and quantitative studies, was gathered via systematic and iterative searching processes. Data were synthesised according to realist principles to develop explanations accounting for clinicians’ decision making in relation to laboratory tests. Results: 145 documents contributed data to the synthesis. Laboratory test ordering can fulfil many roles in primary care. Decisions about tests are incorporated into practice heuristics, and tests are deployed as a tool to manage patient interactions. Ordering tests may be easier than not ordering tests in existing systems. Alongside high workloads and limited time to devote to decision making, there is a common perception that laboratory tests are relatively inconsequential interventions. Clinicians prioritise efficiency over thoroughness in decision making about laboratory tests. Conclusions: Interventions to change test-ordering practice can be understood as aiming to preserve efficiency or encourage thoroughness in decision making. Intervention designs and evaluations should consider how testing decisions are made in real-world clinical practice.


2015 ◽  
Vol 15 (1) ◽  
Author(s):  
Louis S. Nelson ◽  
Scott R. Davis ◽  
Robert M. Humble ◽  
Jeff Kulhavy ◽  
Dean R. Aman ◽  
...  

2020 ◽  
Vol 162 (4) ◽  
pp. 554-558
Author(s):  
Vaibhav H. Ramprasad ◽  
Amber D. Shaffer ◽  
Noel Jabbour

Objective Congenital ear anomalies are associated with congenital cardiac and renal defects. Renal ultrasounds, electrocardiogram, and echocardiogram can be utilized for diagnosis of these concurrent defects. No standard of care exists for the workup of patients with microtia. The goals of this study were to describe the utilization of diagnostic testing for cardiac and renal anomalies and to identify their prevalence in patients with microtia. Study Design Case series with chart review. Setting Children’s Hospital of Pittsburgh of the University of Pittsburgh Medical Center. Subjects and Methods This study is an Institutional Review Board–approved retrospective review of consecutive patients born between 2002 and 2016 who were diagnosed with microtia and seen in the otolaryngology clinic at a tertiary care children’s hospital. Demographics, sidedness and grade of microtia, comorbid diagnoses, and details of renal and cardiovascular evaluations were recorded. Factors associated with retroperitoneal ultrasound and cardiac testing were assessed with logistic regression. Results Microtia was present in 102 patients, and 98 patients were included as they received follow-up. Microtia was associated with craniofacial syndrome in 34.7% of patients. Renal ultrasound was performed in 64.3% of patients, and 12.9% of patients with ultrasounds had renal aplasia. Cardiac workup (electrocardiogram or echocardiogram) was completed in 60.2% of patients, and of this subset, 54.2% had a congenital heart defect. Conclusion Diagnostic testing revealed renal anomalies and cardiac defects in patients with isolated microtia at a higher rate than in the general population. This suggests the need for further evaluation of the role of routine screening in patients with microtia.

