Real-time utilisation of administrative data in the ED to identify older patients at risk: development and validation of the Dynamic Silver Code

BMJ Open ◽  
2019 ◽  
Vol 9 (12) ◽  
pp. e033374 ◽  
Author(s):  
Daniela Balzi ◽  
Giulia Carreras ◽  
Francesco Tonarelli ◽  
Luca Degli Esposti ◽  
Paola Michelozzi ◽  
...  

Objective: Identification of older patients at risk, among those accessing the emergency department (ED), may support clinical decision-making. To this purpose, we developed and validated the Dynamic Silver Code (DSC), a score based on real-time linkage of administrative data.
Design and setting: The 'Silver Code National Project (SCNP)', a non-concurrent cohort study, was used for retrospective development and internal validation of the DSC. External validation was obtained in the 'Anziani in DEA (AIDEA)' concurrent cohort study, where the DSC was generated by the software routinely used in the ED.
Participants: The SCNP contained 281 321 records of 180 079 residents aged 75+ years from Tuscany and Lazio, Italy, admitted via the ED to Internal Medicine or Geriatrics units. The AIDEA study enrolled 4425 subjects aged 75+ years (5217 records) accessing two EDs in the area of Florence, Italy.
Interventions: None.
Outcome measures: Primary outcome: 1-year mortality. Secondary outcomes: 7- and 30-day mortality and 1-year recurrent ED visits.
Results: Advancing age, male gender, previous hospital admission, discharge diagnosis, time from discharge and polypharmacy predicted 1-year mortality and contributed to the DSC in the development subsample of the SCNP cohort. Based on score quartiles, participants were classified into low, medium, high and very high-risk classes. In the SCNP validation sample, mortality increased progressively from 144 to 367 per 1000 person-years across DSC classes, with HRs (95% CI) of 1.92 (1.85 to 1.99), 2.71 (2.61 to 2.81) and 5.40 (5.21 to 5.59) in classes II, III and IV, respectively, versus class I (p<0.001). Findings were similar in AIDEA, where the DSC also predicted recurrent ED visits within 1 year. In both databases, the DSC predicted 7- and 30-day mortality.
Conclusions: The DSC, based on administrative data available in real time, predicts the prognosis of older patients and might improve their management in the ED.
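The quartile-based risk-class assignment described above can be sketched as follows; the cut-points and score values are illustrative only, not the published DSC weights:

```python
# Hypothetical sketch of mapping a Dynamic Silver Code score to a risk class
# via quartile cut-points. The quartile values below are invented examples.

def dsc_risk_class(score, quartiles):
    """Map a DSC score to risk class I-IV given quartile cut-points (q1, q2, q3)."""
    q1, q2, q3 = quartiles
    if score <= q1:
        return "I (low)"
    elif score <= q2:
        return "II (medium)"
    elif score <= q3:
        return "III (high)"
    return "IV (very high)"

print(dsc_risk_class(12, (10, 20, 30)))  # → II (medium)
```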

2020 ◽  
Author(s):  
Dennis Shung ◽  
Cynthia Tsay ◽  
Loren Laine ◽  
Prem Thomas ◽  
Caitlin Partridge ◽  
...  

Background and Aim: Guidelines recommend risk stratification scores in patients presenting with gastrointestinal bleeding (GIB), but such scores are uncommonly employed in practice. Automation and deployment of risk stratification scores in real time within electronic health records (EHRs) would overcome a major impediment. This requires an automated mechanism to accurately identify ("phenotype") patients with GIB at the time of presentation. The goal is to identify patients with acute GIB by developing and evaluating EHR-based phenotyping algorithms for emergency department (ED) patients.
Methods: We specified criteria using structured data elements to create rules for identifying patients, and also developed a natural-language-processing (NLP)-based algorithm for automated phenotyping of patients. We tested both with tenfold cross-validation (n=7144) and external validation (n=2988), and compared them with the standard method for encoding patient conditions in the EHR, the Systematized Nomenclature of Medicine (SNOMED). The gold standard for GIB diagnosis was independent dual manual review of medical records. The primary outcome was positive predictive value (PPV).
Results: A decision rule using GIB-specific terms from ED triage and from the ED review-of-systems assessment performed better than SNOMED on internal validation (PPV=91% [90%-93%] vs. 74% [71%-76%], p<0.001) and external validation (PPV=85% [84%-87%] vs. 69% [67%-71%], p<0.001). The NLP algorithm (external validation PPV=80% [79%-82%]) was not superior to the structured-data-fields decision rule.
Conclusions: An automated decision rule employing GIB-specific triage and review-of-systems terms can be used to trigger EHR-based deployment of risk stratification models to guide clinical decision-making in real time for patients with acute GIB presenting to the ED.
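A minimal sketch of the structured-fields decision-rule idea: flag a visit when GIB-specific terms appear in the triage or review-of-systems text. The term list and field handling here are hypothetical illustrations, not the study's actual rule:

```python
# Illustrative term-matching rule for GIB phenotyping. GIB_TERMS and the two
# text fields are assumptions for the sketch, not the published criteria.

GIB_TERMS = {"hematemesis", "melena", "hematochezia", "gi bleed", "rectal bleeding"}

def flag_gib(triage_text: str, ros_text: str) -> bool:
    """Return True if any GIB-specific term appears in either free-text field."""
    combined = f"{triage_text} {ros_text}".lower()
    return any(term in combined for term in GIB_TERMS)

print(flag_gib("c/o melena x2 days", "denies chest pain"))  # → True
print(flag_gib("ankle sprain after fall", "no complaints"))  # → False
```

In practice a positive flag would trigger downstream risk-stratification scoring in the EHR, which is the deployment pattern the abstract describes.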


2020 ◽  
Vol 13 (9) ◽  
pp. 238 ◽  
Author(s):  
Cristina Müller ◽  
Roger Schibli ◽  
Britta Maurer

Herein, we discuss the potential role of folic acid-based radiopharmaceuticals for macrophage imaging to support clinical decision-making in patients with COVID-19. Activated macrophages play an important role during coronavirus infections. Exuberant host responses, i.e., a cytokine storm with an increase in macrophage-related cytokines such as TNFα, IL-1β, and IL-6, can lead to life-threatening complications such as acute respiratory distress syndrome (ARDS), which develops in approximately 20% of patients. Diverse immune-modulating therapies are currently being tested in clinical trials. In a preclinical proof-of-concept study in experimental interstitial lung disease, we showed the potential of 18F-AzaFol, an 18F-labeled folic acid-based radiotracer, as a specific novel imaging tool for the visualization and monitoring of macrophage-driven lung diseases. 18F-AzaFol binds to folate receptor-beta (FRβ), which is expressed on activated macrophages involved in inflammatory conditions. In a recent multicenter cancer trial, 18F-AzaFol was successfully and safely applied (NCT03242993). We propose that the visualization of activated macrophage-related disease processes by folate radiotracer-based nuclear imaging could support clinical decision-making by identifying COVID-19 patients at risk of severe disease progression with a potentially lethal outcome.


2021 ◽  
Author(s):  
Ju Sun ◽  
Le Peng ◽  
Taihui Li ◽  
Dyah Adila ◽  
Zach Zaiman ◽  
...  

Importance: An artificial intelligence (AI)-based model to predict COVID-19 likelihood from chest x-ray (CXR) findings can serve as an important adjunct to accelerate and improve immediate clinical decision making. Despite significant efforts, many limitations and biases exist in previously developed AI diagnostic models for COVID-19. Utilizing a large set of local and international CXR images, we developed an AI model with high performance on temporal and external validation.
Objective: Investigate real-time performance of an AI-enabled COVID-19 diagnostic support system across a 12-hospital system.
Design: Prospective observational study.
Setting: Labeled frontal CXR images (samples of COVID-19 and non-COVID-19) from the M Health Fairview (Minnesota, USA), Valencian Region Medical ImageBank (Spain), MIMIC-CXR, Open-I 2013 Chest X-ray Collection, GitHub COVID-19 Image Data Collection (International), Indiana University (Indiana, USA), and Emory University (Georgia, USA).
Participants: Internal (training, temporal, and real-time validation): 51,592 CXRs; public: 27,424 CXRs; external (Indiana University): 10,002 CXRs; external (Emory University): 2002 CXRs.
Main Outcomes and Measures: Model performance assessed via receiver operating characteristic (ROC) curves, precision-recall curves, and F1 score.
Results: Patients who were COVID-19 positive had significantly higher COVID-19 Diagnostic Scores (median 0.1 [IQR: 0.0-0.8] vs. median 0.0 [IQR: 0.0-0.1], p < 0.001) than patients who were COVID-19 negative. Pre-implementation, the AI model performed well on temporal validation (AUROC 0.8) and external validation (AUROC 0.76 at Indiana University, AUROC 0.72 at Emory University). The model showed unrealistically high performance (AUROC > 0.95) on publicly available databases. Real-time model performance was unchanged over 19 weeks of implementation (AUROC 0.70). On subgroup analysis, the model had improved discrimination for patients with severe as compared to mild or moderate disease (p < 0.001). Model performance was highest in Asians and lowest in whites, and similar between males and females.
Conclusions and Relevance: AI-based diagnostic tools may serve as an adjunct, but not a replacement, for clinical decision support of COVID-19 diagnosis, which largely hinges on exposure history, signs, and symptoms. While AI-based tools have not yet reached full diagnostic potential in COVID-19, they may still offer valuable information to clinicians when taken into consideration along with clinical signs and symptoms.
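AUROC, the headline metric in these validations, can be computed directly from labels and scores via the Mann-Whitney rank formulation; the labels and scores below are toy data, not study results:

```python
# Minimal AUROC sketch (no external dependencies): the probability that a
# randomly chosen positive scores higher than a randomly chosen negative,
# with ties receiving half credit.

def auroc(labels, scores):
    """AUROC via pairwise comparison of positive vs. negative scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([1, 1, 0, 0], [0.8, 0.4, 0.3, 0.1]))  # → 1.0 (perfect separation)
```

The O(P*N) pairwise loop is fine for a sketch; production code would use a sorting-based O(n log n) implementation or a library routine.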


2020 ◽  
Vol 26 (Supplement_1) ◽  
pp. S67-S68
Author(s):  
Jeffrey Berinstein ◽  
Shirley Cohen-Mekelburg ◽  
Calen Steiner ◽  
Megan Mcleod ◽  
Mohamed Noureldin ◽  
...  

Abstract
Background: High-deductible health plan (HDHP) enrollment has increased rapidly over the last decade. Patients with HDHPs are incentivized to delay or avoid necessary medical care. We aimed to quantify the out-of-pocket costs of Inflammatory Bowel Disease (IBD) patients at risk for high healthcare resource utilization and to evaluate for differences in medical service utilization according to time in insurance period between HDHP and traditional health plan (THP) enrollees. Variations in healthcare utilization according to time may suggest that these patients are delaying or foregoing necessary medical care due to healthcare costs.
Methods: IBD patients at risk for high resource utilization (defined as recent corticosteroid and narcotic use) continuously enrolled in an HDHP or THP from 2009-2016 were identified using the Truven Health MarketScan database. Median annual financial information was calculated. Time trends in office visits, colonoscopies, emergency department (ED) visits, and hospitalizations were evaluated using additive decomposition time series analysis. Financial information and time trends were compared between the two insurance plan groups.
Results: Of 605,862 patients with a diagnosis of IBD, we identified 13,052 at risk for high resource utilization with continuous insurance plan enrollment. The median annual out-of-pocket costs were higher in the HDHP group (n=524) than in the THP group (n=12,458) ($1,920 vs. $1,205, p<0.001), as was the median deductible amount ($1,015 vs. $289, p<0.001), without any difference in the median annual total healthcare expenses (Figure 1). Time in insurance period had a greater influence on utilization of colonoscopies, ED visits, and hospitalizations in IBD patients enrolled in HDHPs compared to THPs (Figure 2). Colonoscopies peaked in the 4th quarter, ED visits peaked in the 1st quarter, and hospitalizations peaked in the 3rd and 4th quarters.
Conclusion: Among IBD patients at high risk for IBD-related utilization, HDHP enrollment does not change the cost of care but shifts healthcare costs onto patients. This may be a result of HDHPs incentivizing delays in care, with a potential for both worse disease outcomes and financial toxicity, and needs to be further examined in prospective studies.
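The seasonal component of an additive decomposition, of the kind used above to detect quarterly peaks in utilisation, can be sketched by averaging each quarter's deviation from the overall mean; the counts below are invented for illustration:

```python
# Toy additive-decomposition sketch: quarter-of-year seasonal effects as mean
# deviation from the overall mean. Real analyses (e.g. statsmodels
# seasonal_decompose) also remove a moving-average trend first.

def seasonal_means(series, period=4):
    """Seasonal effect per position in the cycle, relative to the overall mean."""
    overall = sum(series) / len(series)
    return [sum(series[q::period]) / len(series[q::period]) - overall
            for q in range(period)]

counts = [10, 8, 12, 15, 11, 9, 13, 16]  # two years of quarterly visits (toy)
print(seasonal_means(counts))  # → [-1.25, -3.25, 0.75, 3.75]
```

A large positive effect in one quarter (here Q4) is the kind of signal that would show up as the "colonoscopies peaked in the 4th quarter" pattern reported above.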


BMJ Open ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. e045895
Author(s):  
Rebecca Sharp ◽  
Peter Carr ◽  
Jessie Childs ◽  
Andrew Scullion ◽  
Mark Young ◽  
...  

Objectives: Determine the effect of the catheter to vein ratio (CVR) on rates of symptomatic thrombosis in individuals with a peripherally inserted central catheter (PICC) and identify the optimal CVR cut-off point according to diagnostic group.
Design: Retrospective cohort study.
Setting: Four tertiary hospitals in Australia and New Zealand.
Participants: Adults who had undergone PICC insertion.
Primary outcome measure: Symptomatic thrombus of the limb in which the PICC was inserted.
Results: 2438 PICC insertions were included, with 39 cases of thrombosis (1.6%; 95% CI 1.14% to 2.19%). Receiver operating characteristic analysis could not be performed to determine the optimal CVR overall or according to diagnosis. The association between risk of thrombosis and CVR cut-offs commonly used in clinical practice was therefore analysed. A 45% cut-off (≤45% versus ≥46%) was predictive of thrombosis, with those with a higher ratio having more than twice the risk (relative risk 2.30; 95% CI 1.202 to 4.383; p=0.01). This pattern continued when only those with malignancy were included in the analysis: those with cancer had twice the risk of thrombosis with a CVR greater than 45%. In contrast, the 33% CVR cut-off was not associated with statistically significant results overall or in those with malignancy. Neither the 33% nor the 45% CVR cut-off produced statistically significant results in those with infection or other non-malignant conditions.
Conclusions: Adherence to CVR cut-offs is an important component of PICC insertion clinical decision making to reduce the risk of thrombosis. These results suggest that in individuals with cancer, a CVR ≤45% should be considered to minimise the risk of thrombosis. Further research is needed to determine the risk of thrombosis according to malignancy type and the optimal CVR for those with a non-malignant diagnosis.
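The CVR check itself is a simple ratio against a cut-off. A minimal sketch, with illustrative diameters (the function name and inputs are assumptions for the example, not from the study):

```python
# Hypothetical CVR screening helper: CVR = catheter outer diameter / vein
# diameter, flagged when it exceeds the 45% cut-off discussed above.
# The diameters below are illustrative, not clinical recommendations.

def cvr_exceeds_cutoff(catheter_d_mm: float, vein_d_mm: float,
                       cutoff: float = 0.45) -> bool:
    """Return True when the catheter-to-vein ratio exceeds the cut-off."""
    return (catheter_d_mm / vein_d_mm) > cutoff

print(cvr_exceeds_cutoff(1.7, 3.0))  # 1.7/3.0 ≈ 0.57 > 0.45 → True
print(cvr_exceeds_cutoff(1.0, 4.0))  # 1.0/4.0 = 0.25 ≤ 0.45 → False
```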


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ana-Luisa Silva ◽  
Paulina Klaudyna Powalowska ◽  
Magdalena Stolarek ◽  
Eleanor Ruth Gray ◽  
Rebecca Natalie Palmer ◽  
...  

Abstract
Accurate detection of somatic variants, against a background of wild-type molecules, is essential for clinical decision making in oncology. Existing approaches, such as allele-specific real-time PCR, are typically limited to a single target gene and lack sensitivity. Alternatively, next-generation sequencing methods suffer from slow turnaround times and high costs, and are complex to implement, typically limiting them to single-site use. Here, we report a method, which we term Allele-Specific PYrophosphorolysis Reaction (ASPYRE), for high-sensitivity detection of panels of somatic variants. ASPYRE has a simple workflow and is compatible with standard molecular biology reagents and real-time PCR instruments. We show that ASPYRE has single-molecule sensitivity and is tolerant of DNA extracted from plasma and formalin-fixed paraffin-embedded (FFPE) samples. We also demonstrate two multiplex panels, including one for detection of 47 EGFR variants. ASPYRE presents an effective and accessible method that simplifies highly sensitive and multiplexed detection of somatic variants.


2021 ◽  
pp. 205715852110617
Author(s):  
Mette Geil Kollerup ◽  
Birgitte Schantz Laursen

Transitional medication management, in which individual needs are balanced against organizational priorities, is crucial for safe discharge processes. The aim of this study was to explore hospital nurses' transitional medication management in the discharge of older patients with multi-morbidity. Using an ethnographic approach, data were collected through participant observations at a mixed medical ward at a Danish university hospital over two weeks. The participants were five registered nurses responsible for the nursing care of 23 patients with multi-morbidity who were planned for discharge. The data comprised field notes that were analysed using iterative processes of domain, taxonomic and componential analysis. The reporting adhered to the COREQ checklist. Hospital nurses' transitional medication management was characterized by unpredictability and inconsistency in patient situations, fragmentation and discontinuity in working processes, and complexity in communication systems. Special attention to nurses' needs assessment skills and clinical decision making in caring for patients with multi-morbidity in a single-disease-focused healthcare system is required.


2018 ◽  
Author(s):  
Robert Moss ◽  
Alexander E Zarebski ◽  
Sandra J Carlson ◽  
James M McCaw

Abstract
For diseases such as influenza, where the majority of infected persons experience mild (if any) symptoms, surveillance systems are sensitive to changes in healthcare-seeking and clinical decision-making behaviours. This presents a challenge when trying to interpret surveillance data in near-real-time (e.g., in order to provide public health decision support). Australia experienced a particularly large and severe influenza season in 2017, perhaps in part due to (a) mild cases being more likely to seek healthcare; and (b) clinicians being more likely to collect specimens for RT-PCR influenza tests. In this study we used weekly Flutracking surveillance data to estimate the probability that a person with influenza-like illness (ILI) would seek healthcare and have a specimen collected. We then used this estimated probability to calibrate near-real-time seasonal influenza forecasts at each week of the 2017 season, to see whether predictive skill could be improved. While the number of self-reported influenza tests in the weekly surveys is typically very low, we were able to detect a substantial change in healthcare-seeking behaviour and clinician testing behaviour prior to the epidemic peak. Adjusting for these changes in behaviour in the forecasting framework improved predictive skill. Our analysis demonstrates the unique value of community-level surveillance systems, such as Flutracking, when interpreting traditional surveillance data.
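The calibration idea reduces to scaling true incidence by the estimated probabilities of seeking care and being tested. A toy sketch with invented numbers (the function and values are illustrative, not the paper's observation model):

```python
# Hypothetical observation-model sketch: expected test-confirmed cases are the
# true ILI case count scaled by P(seek healthcare) and P(specimen collected).
# If these probabilities rise mid-season, case counts inflate even when true
# incidence is flat, which is why forecasts must adjust for them.

def expected_confirmed(true_cases: int, p_seek: float, p_test: float) -> float:
    """Expected number of test-confirmed cases under the observation model."""
    return true_cases * p_seek * p_test

print(expected_confirmed(1000, 0.40, 0.25))  # → 100.0
print(expected_confirmed(1000, 0.55, 0.35))  # same incidence, more confirmations
```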


2021 ◽  
Vol 28 (1) ◽  
pp. e100267
Author(s):  
Keerthi Harish ◽  
Ben Zhang ◽  
Peter Stella ◽  
Kevin Hauck ◽  
Marwa M Moussa ◽  
...  

Objectives: Predictive studies play important roles in the development of models informing care for patients with COVID-19. Our concern is that studies producing ill-performing models may lead to inappropriate clinical decision-making. Thus, our objective is to summarise and characterise the performance of prognostic models for COVID-19 on external data.
Methods: We performed a validation of parsimonious prognostic models for patients with COVID-19 identified in a literature search for published and preprint articles. Ten models meeting inclusion criteria were either (a) externally validated with our data using the reported model variables and weights or (b) rebuilt using the original features if no weights were provided. Nine studies had internally or externally validated their models on cohorts of between 18 and 320 inpatients with COVID-19. One model used cross-validation. Our external validation cohort consisted of 4444 patients with COVID-19 hospitalised between 1 March and 27 May 2020.
Results: Most models failed validation when applied to our institution's data. Included studies reported an average validation area under the receiver operating characteristic curve (AUROC) of 0.828. Models applied with reported features averaged an AUROC of 0.66 when validated on our data. Models rebuilt with the same features averaged an AUROC of 0.755 when validated on our data. In both cases, models did not validate against their studies' reported AUROC values.
Discussion: Published and preprint prognostic models for patients infected with COVID-19 performed substantially worse when applied to external data. Further inquiry is required to elucidate the mechanisms underlying these performance deviations.
Conclusions: Clinicians should employ caution when applying models for clinical prediction without careful validation on local data.

