Use of the Electronic Health Record to Track Continuity of Care in Neurological Surgery Residency

2014 ◽  
Vol 6 (3) ◽  
pp. 507-511 ◽  
Author(s):  
N. Scott Litofsky ◽  
Ali Farooqui ◽  
Tomoko Tanaka ◽  
Thor Norregaard

Abstract Background Continuity of care in neurological surgery includes preoperative planning, technical and cognitive operative experience, and postoperative follow-up. Determining the extent of continuity of care with duty hour limits is problematic. Objective We used electronic health record data to track continuity of care in a neurological surgery program and to assess changes in rotation requirements. Methods The electronic health record was surveyed for all dictated resident–neurological surgery patient encounters (excluding progress notes), discharge summaries, and bedside procedures (July 2009–November 2011). Encounters were designated as preoperative, operative, or postoperative and were grouped by postgraduate year (PGY), PGY-1 through PGY-6. Results A total of 6382 dictations were reviewed, with 5231 (82.0%) pertinent to neurological surgery. Of the 1469 operative notes, 303 (20.6%) had a record of an encounter with the operating resident in either a postoperative or preoperative setting. Preoperative encounters totaled 10.1% (148 of 1469); postoperative, 5.1% (75 of 1469); and encounters with both were 5.4% (80 of 1469). Continuity of care was as follows: PGY-1, 13.8% (4 of 29); PGY-2, 17.4% (26 of 149); PGY-3, 29.0% (36 of 124); PGY-4, 24.8% (73 of 294); PGY-5, 28.8% (109 of 379); and PGY-6, 11.1% (55 of 494). One of the highest continuity rates was observed in a rotation specifically constructed to enhance continuity of care. Conclusions The electronic health record can be used to track resident continuity of care in neurological surgery. The primary operating resident saw the patient in nonoperative settings, such as general admission, clinic visitation, or consultation, in 20.6% (303 of 1469) of cases.
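The continuity metric described above reduces to a set-membership check: for each operative note, did the operating resident also dictate a preoperative or postoperative encounter for the same patient? A minimal sketch, assuming an illustrative tuple schema (the study's actual EHR data model is not specified here):

```python
# Sketch only: each encounter is (resident, patient, phase),
# phase in {'pre', 'op', 'post'}. Field names are illustrative assumptions.

def continuity_rate(encounters):
    """Return (continuity_cases, operative_notes): operative notes whose
    resident also saw the same patient in a nonoperative setting."""
    nonop = {(r, p) for r, p, phase in encounters if phase in ('pre', 'post')}
    ops = [(r, p) for r, p, phase in encounters if phase == 'op']
    hits = sum(1 for pair in ops if pair in nonop)
    return hits, len(ops)

encounters = [
    ('res1', 'patA', 'pre'), ('res1', 'patA', 'op'),   # continuity
    ('res2', 'patB', 'op'),                            # no continuity
    ('res2', 'patC', 'op'), ('res2', 'patC', 'post'),  # continuity
]
hits, total = continuity_rate(encounters)
print(f"{hits}/{total} = {hits/total:.1%}")  # 2/3 = 66.7%
```

Grouping the same counts by the resident's PGY level would reproduce the per-year breakdown reported in the Results.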

Circulation ◽  
2021 ◽  
Vol 143 (Suppl_1) ◽  
Author(s):  
Nrupen A Bhavsar ◽  
John Pura ◽  
Ann Marie Navar ◽  
Anne Hellkamp ◽  
Paul Muntner ◽  
...  

Introduction: Studies using electronic health record (EHR) data often have a limited number of years available for analysis. There is a trade-off between the length of the look-back period used to define baseline characteristics and the follow-up duration used to define outcomes. Objective: Quantify the impact of 6-, 12-, and 24-month look-back periods on the association between diabetes (DM) and subsequent cardiovascular (CV) events using EHR data alone and in combination with Medicare claims. Methods: EHR data from an academic health system and a federally qualified health center from 2009-2014 were linked to Medicare claims data. Eligibility criteria were age ≥65 years, Durham County address, 24 months of continuous enrollment after first claim, an EHR encounter in the 2011 index year, and no history of cardiovascular disease (CVD) in the 24 months prior to the index date (i.e., the look-back period). DM was defined using EHR ICD-9 codes, HbA1c ≥6.5%, or glucose-lowering medication, and using claims-based diagnosis codes or glucose-lowering medication. The outcome was a major CV event (myocardial infarction, stroke, or cardiac procedure) defined by diagnosis or procedure codes. Hazard ratios (HR) compared time to the outcome between patients with and without DM. Results: In 5473 patients, mean age was 77 years, 67% were female, and 28% were Black. The prevalence of DM using EHR data only increased with a longer look-back period (6 months [19%]; 12 months [21%]; 24 months [23%]) but was less than the prevalence using all available data from EHRs and claims together (28%) (Table 1A). Shorter look-back periods resulted in higher HRs (6-month HR=1.64) than longer look-back periods (24-month HR=1.41) using EHR data alone or all available data from the EHR and claims together (HR=1.43) (Table 1B).
Conclusions: To avoid overestimating associations, studies of CVD using EHR data to identify baseline conditions may want to use 12- to 24-month look-back periods in the absence of additional administrative data. This may also lead to a shorter follow-up period.
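The mechanism behind the prevalence differences above is simple: a condition only counts as "baseline" if a qualifying record falls inside the look-back window before the index date, so older diagnoses are missed by shorter windows. A minimal sketch (not the study's code; the 30-days-per-month approximation is an assumption):

```python
from datetime import date, timedelta

def has_baseline_dx(dx_dates, index_date, lookback_months):
    """True if any qualifying record falls inside the look-back window
    immediately preceding the index date (approximating months as 30 days)."""
    window_start = index_date - timedelta(days=30 * lookback_months)
    return any(window_start <= d < index_date for d in dx_dates)

index = date(2011, 6, 1)
dm_codes = [date(2010, 3, 15)]  # DM coded ~14.5 months before index

print(has_baseline_dx(dm_codes, index, 6))   # False: outside the 6-month window
print(has_baseline_dx(dm_codes, index, 24))  # True: captured by 24 months
```

A patient misclassified as non-diabetic this way dilutes the exposure contrast, which is consistent with the shifting hazard ratios reported in Table 1B.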


BMJ Open ◽  
2018 ◽  
Vol 8 (7) ◽  
pp. e021505 ◽  
Author(s):  
James H Flory ◽  
Scott Justin Keating ◽  
David Siscovick ◽  
Alvin I Mushlin

Objectives Non-persistence may be a significant barrier to the use of metformin. Our objective was to assess reasons for metformin non-persistence, and whether initial metformin dosing or use of extended release (ER) formulations affects persistence to metformin therapy. Design Retrospective cohort study. Setting Electronic health record data from a network of urban academic practices. Participants The cohort was restricted to individuals receiving a metformin prescription between 2009/1/1 and 2015/9/31, under care for at least 6 months before the first prescription of metformin. The cohort was further restricted to patients with no evidence of any antihyperglycaemic agent use prior to the index date, a haemoglobin A1c measured within 1 month prior to or 1 week after the index date, at least 6 months of follow-up, and with the initial metformin prescription originating in either a general medicine or endocrinology clinic. Primary and secondary outcome measures The primary outcome measure was early non-persistence, as defined by the absence of further prescriptions for metformin after the first 90 days of follow-up. Results The final cohort consisted of 1259 eligible individuals. The overall rate of early non-persistence was 20.3%. Initial use of ER and low starting dose metformin were associated with significantly lower rates of reported side effects and non-persistence, but after multivariable analysis, only use of low starting doses was independently associated with improved persistence (adjusted OR 0.54, 95% CI 0.37 to 0.76, for comparison of a 500 mg daily dose or less to all higher doses). Conclusions These data support the routine prescribing of low starting doses of metformin as a tool to improve persistence. In this study setting, many providers routinely used ER metformin as an initial treatment; while this practice may have benefits, it deserves more rigorous study to assess whether increased costs are justified.
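The primary outcome above is operationalizable directly from prescription dates: a patient is early non-persistent if no metformin prescription appears more than 90 days after the index prescription. A hedged sketch of that definition (not the authors' code):

```python
from datetime import date

def early_nonpersistent(rx_dates):
    """rx_dates: all metformin prescription dates for one patient.
    Early non-persistence = no further prescription more than 90 days
    after the index (first) prescription."""
    index = min(rx_dates)
    return not any((d - index).days > 90 for d in rx_dates)

print(early_nonpersistent([date(2012, 1, 1)]))                    # True: never refilled
print(early_nonpersistent([date(2012, 1, 1), date(2012, 2, 1)]))  # True: refill only within 90 days
print(early_nonpersistent([date(2012, 1, 1), date(2012, 5, 1)]))  # False: persists past 90 days
```

Note the second case: a refill inside the 90-day window does not count as persistence under this definition, which is why the measure is called *early* non-persistence.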


2011 ◽  
Vol 4 (0) ◽  
Author(s):  
Michael Klompas ◽  
Chaim Kirby ◽  
Jason McVetta ◽  
Paul Oppedisano ◽  
John Brownstein ◽  
...  

Author(s):  
José Carlos Ferrão ◽  
Mónica Duarte Oliveira ◽  
Daniel Gartner ◽  
Filipe Janela ◽  
Henrique M. G. Martins

Author(s):  
Jeffrey G Klann ◽  
Griffin M Weber ◽  
Hossein Estiri ◽  
Bertrand Moal ◽  
Paul Avillach ◽  
...  

Abstract Introduction The Consortium for Clinical Characterization of COVID-19 by EHR (4CE) is an international collaboration addressing COVID-19 with federated analyses of electronic health record (EHR) data. Objective We sought to develop and validate a computable phenotype for COVID-19 severity. Methods Twelve 4CE sites participated. First, we developed an EHR-based severity phenotype consisting of six code classes, and we validated it on patient hospitalization data from the 12 4CE clinical sites against the outcomes of ICU admission and/or death. We also piloted an alternative machine-learning approach and compared selected predictors of severity to the 4CE phenotype at one site. Results The full 4CE severity phenotype had pooled sensitivity of 0.73 and specificity of 0.83 for the combined outcome of ICU admission and/or death. The sensitivity of individual code categories for acuity varied widely, by as much as 0.65 across sites. At one pilot site, the expert-derived phenotype had mean AUC 0.903 (95% CI: 0.886, 0.921), compared to AUC 0.956 (95% CI: 0.952, 0.959) for the machine-learning approach. Billing codes were poor proxies of ICU admission, with as low as 49% precision and recall compared to chart review. Discussion We developed a severity phenotype using six code classes that proved resilient to coding variability across international institutions. In contrast, machine-learning approaches may overfit hospital-specific orders. Manual chart review revealed discrepancies even in the gold-standard outcomes, possibly due to heterogeneous pandemic conditions. Conclusion We developed an EHR-based severity phenotype for COVID-19 in hospitalized patients and validated it at 12 international sites.
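Structurally, a rule-based phenotype like this is a disjunction: the severity flag fires if any of the code classes is present, and validation compares that flag to the ICU/death outcome. A sketch under stated assumptions; the class names below are placeholders, not the actual 4CE code lists:

```python
# Illustrative only: placeholder class names, invented patient data.
SEVERITY_CLASSES = frozenset(
    ['icu_order', 'ventilator', 'vasopressor', 'severe_lab', 'severe_dx', 'sedative']
)

def severe(codes):
    """Phenotype fires if any of the six code classes appears for the patient."""
    return any(c in SEVERITY_CLASSES for c in codes)

def sens_spec(patients):
    """patients: list of (codes, outcome), outcome = ICU admission and/or death."""
    tp = sum(1 for codes, y in patients if y and severe(codes))
    fn = sum(1 for codes, y in patients if y and not severe(codes))
    tn = sum(1 for codes, y in patients if not y and not severe(codes))
    fp = sum(1 for codes, y in patients if not y and severe(codes))
    return tp / (tp + fn), tn / (tn + fp)

patients = [(['ventilator'], True), ([], True), ([], False), (['severe_lab'], False)]
sens, spec = sens_spec(patients)  # 0.5, 0.5 on this toy data
```

Because each site evaluates the same disjunction on its own data, only the per-site confusion-matrix counts need to be shared for pooling, which is the appeal of this design for federated analyses.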


2020 ◽  
Vol 41 (S1) ◽  
pp. s39-s39
Author(s):  
Pontus Naucler ◽  
Suzanne D. van der Werff ◽  
John Valik ◽  
Logan Ward ◽  
Anders Ternhag ◽  
...  

Background: Healthcare-associated infection (HAI) surveillance is essential for most infection prevention programs, and continuous epidemiological data can be used to inform healthcare personnel, allocate resources, and evaluate interventions to prevent HAIs. Many HAI surveillance systems today are based on time-consuming and resource-intensive manual reviews of patient records. The objective of HAI-proactive, a Swedish triple-helix innovation project, is to develop and implement a fully automated HAI surveillance system based on electronic health record data. Furthermore, the project aims to develop machine-learning–based screening algorithms for early prediction of HAI at the individual patient level. Methods: The project is performed with support from Sweden’s Innovation Agency in collaboration among academic, health, and industry partners. Development of rule-based and machine-learning algorithms is performed within a research database, which consists of all electronic health record data from patients admitted to the Karolinska University Hospital. Natural language processing is used for processing free-text medical notes. To validate algorithm performance, manual annotation was performed based on international HAI definitions from the European Centre for Disease Prevention and Control, the Centers for Disease Control and Prevention, and the Sepsis-3 criteria. Currently, the project is building a platform for real-time data access to implement the algorithms within Region Stockholm. Results: The project has developed a rule-based surveillance algorithm for sepsis that continuously monitors patients admitted to the hospital, with a sensitivity of 0.89 (95% CI, 0.85–0.93), a specificity of 0.99 (0.98–0.99), a positive predictive value of 0.88 (0.83–0.93), and a negative predictive value of 0.99 (0.98–0.99).
The healthcare-associated urinary tract infection surveillance algorithm, which is based on free-text analysis and negations to define symptoms, had a sensitivity of 0.73 (0.66–0.80) and a positive predictive value of 0.68 (0.61–0.75). The sensitivity and positive predictive value of an algorithm based on significant bacterial growth in urine culture only were 0.99 (0.97–1.00) and 0.39 (0.34–0.44), respectively. The surveillance system detected differences in incidences between hospital wards and over time. Development of surveillance algorithms for pneumonia, catheter-related infections, and Clostridioides difficile infections, as well as machine-learning–based models for early prediction, is ongoing. We intend to present results from all algorithms. Conclusions: With access to electronic health record data, we have shown that it is feasible to develop a fully automated HAI surveillance system based on algorithms using both structured data and free text for the main healthcare-associated infections. Funding: Sweden’s Innovation Agency and Stockholm County Council. Disclosures: None
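The four validation metrics reported throughout this abstract all derive from one confusion matrix of algorithm flags versus manual annotation. A small sketch of those formulas; the counts below are invented to roughly match the sepsis algorithm's reported point estimates, not the study's actual data:

```python
def surveillance_metrics(tp, fp, fn, tn):
    """Standard validation metrics from a 2x2 confusion matrix of
    algorithm flags (rows) vs. manual annotation (columns)."""
    return {
        'sensitivity': tp / (tp + fn),  # flagged fraction of true HAIs
        'specificity': tn / (tn + fp),  # unflagged fraction of non-HAIs
        'ppv': tp / (tp + fp),          # true HAIs among flagged patients
        'npv': tn / (tn + fn),          # non-HAIs among unflagged patients
    }

# Invented counts chosen to land near the reported sepsis figures:
m = surveillance_metrics(tp=89, fp=12, fn=11, tn=888)
print(m)  # sensitivity 0.89, specificity ~0.99, ppv ~0.88, npv ~0.99
```

The urine-culture-only comparison above illustrates the trade-off these formulas expose: near-perfect sensitivity (0.99) at the cost of a very low PPV (0.39), since culture growth alone flags many patients without infection.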

