Concordance Between Electronic Health Record and Tumor Registry Documentation of Smoking Status Among Patients With Cancer

2021 ◽  
pp. 518-526
Author(s):  
Jennifer H. LeLaurin ◽  
Matthew J. Gurka ◽  
Xiaofei Chi ◽  
Ji-Hyun Lee ◽  
Jaclyn Hall ◽  
...  

PURPOSE Patients with cancer who use tobacco experience reduced treatment effectiveness, increased risk of recurrence and mortality, and diminished quality of life. Accurate tobacco use documentation for patients with cancer is necessary for appropriate clinical decision making and cancer outcomes research. Our aim was to assess agreement between electronic health record (EHR) smoking status data and cancer registry data. MATERIALS AND METHODS We identified all patients with cancer seen at University of Florida Health from 2015 to 2018. Structured EHR smoking status was compared with the tumor registry smoking status for each patient. Sensitivity, specificity, positive predictive values, negative predictive values, and Kappa statistics were calculated. We used logistic regression to determine if patient characteristics were associated with odds of agreement in smoking status between EHR and registry data. RESULTS We analyzed 11,110 patient records. EHR smoking status was documented for nearly all (98%) patients. Overall kappa (0.78; 95% CI, 0.77 to 0.79) indicated moderate agreement between the registry and EHR. The sensitivity was 0.82 (95% CI, 0.81 to 0.84), and the specificity was 0.97 (95% CI, 0.96 to 0.97). The logistic regression results indicated that agreement was more likely among patients who were older and female and if the EHR documentation occurred closer to the date of cancer diagnosis. CONCLUSION Although documentation of smoking status for patients with cancer is standard practice, we only found moderate agreement between EHR and tumor registry data. Interventions and research using EHR data should prioritize ensuring the validity of smoking status data. Multilevel strategies are needed to achieve consistent and accurate documentation of smoking status in cancer care.
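The agreement statistics reported above (sensitivity, specificity, predictive values, and kappa) can all be derived from a single 2×2 table of EHR versus registry smoking status. A minimal pure-Python sketch, treating the registry as the reference standard; the counts in the usage note are illustrative, not the study's data:

```python
def agreement_stats(tp, fp, fn, tn):
    """Agreement metrics for a 2x2 table, treating the tumor registry as
    the reference standard and the EHR as the test classification.
    tp: both record smoker; tn: both record non-smoker;
    fp: EHR smoker / registry non-smoker; fn: EHR non-smoker / registry smoker."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    # Chance agreement expected from the marginal totals (Cohen's kappa)
    p_expected = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "kappa": (p_observed - p_expected) / (1 - p_expected),
    }
```

With illustrative counts, `agreement_stats(40, 5, 10, 45)` gives sensitivity 0.80, specificity 0.90, and kappa 0.70.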

2021 ◽  
Vol 1 (1) ◽  
pp. 6-17
Author(s):  
Andrija Pavlovic ◽  
Nina Rajovic ◽  
Jasmina Pavlovic Stojanovic ◽  
Debora Akinyombo ◽  
Milica Ugljesic ◽  
...  

Introduction: The potential benefits of implementing an electronic health record (EHR) to increase the efficiency of health services and improve the quality of health care are often obstructed by the unwillingness of the users themselves to accept and use the available systems. Aim: The aim of this study was to identify factors that influence physicians' acceptance of EHR use in the daily practice of hospital health care. Material and Methods: A cross-sectional study was conducted among physicians in the General Hospital Pancevo, Serbia. An anonymous questionnaire, developed according to the technology acceptance model (TAM), was used to assess EHR acceptance. The response rate was 91%. Internal consistency was assessed by Cronbach's alpha coefficient. Logistic regression analysis was used to identify the factors influencing acceptance of EHR use. Results: The study population included 156 physicians. The mean age was 46.4 ± 10.4 years, and 58.8% of participants were female. Half of the respondents (50.1%) supported the use of the EHR in comparison with paper patient records. In multivariate logistic regression modeling of social and technical factors, ease of use, usefulness, and attitudes toward EHR use as determinants of EHR acceptance, the following predictors were identified: use of a computer outside the office for reading daily newspapers (p = 0.005), the EHR providing a greater amount of valuable information (p = 0.007), improvement in productivity through EHR use (p < 0.001), and agreement that using the EHR is a good idea (p = 0.014). Overall, the percentage of correct classifications in the model was 83.9%. Conclusion: In this research, determinants of EHR acceptance were assessed in accordance with the TAM, providing an overall good model fit.
Future research should attempt to add other constructs to the TAM in order to fully identify all determinants of physician acceptance of EHR in the complex environment of different health systems.
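Cronbach's alpha, used above to assess the questionnaire's internal consistency, is a short computation over the item-score matrix: the ratio of summed item variances to the variance of respondents' total scores. A minimal pure-Python sketch (illustrative, not the study's analysis code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one inner list of scores per questionnaire item,
    each covering the same respondents in the same order."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))
```

Perfectly consistent items (every respondent answers each item identically) yield alpha = 1.0.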


10.2196/28501 ◽  
2021 ◽  
Vol 8 (3) ◽  
pp. e28501
Author(s):  
Rong Yin ◽  
Katherine Law ◽  
David Neyens

Background Electronic health record (EHR) patient portals are designed to provide medical health records to patients. Using an EHR portal is expected to contribute to positive health outcomes and facilitate patient-provider communication. Objective Our objective was to examine how portal users report using their portals and the factors associated with obtaining health information from the internet. We also examined the desired portal features, factors impacting users’ trust in portals, and barriers to using portals. Methods An internet-based survey study was conducted using Amazon Mechanical Turk. All the participants were adults in the United States who used patient portals. The survey included questions about how the participants used their portals, what factors acted as barriers to using their portals, and how they used and how much they trusted other web-based health information sources as well as their portals. A logistic regression model was used to examine the factors influencing the participants’ trust in their portals. Additionally, the desired features and design characteristics were identified to support the design of future portals. Results A total of 394 participants completed the survey. Most of the participants were less than 35 years old (212/394, 53.8%), with 36.3% (143/394) aged between 35 and 55 years, and 9.9% (39/394) aged above 55 years. Women accounted for 48.5% (191/394) of the survey participants. More than 78% (307/394) of the participants reported using portals at least monthly. The most common portal features used were viewing lab results, making appointments, and paying bills. Participants reported some barriers to portal use including data security and limited access to the internet. 
The results of a logistic regression model predicting trust in portals suggest that participants who were comfortable using their portals (odds ratio [OR] 7.97, 95% CI 1.11-57.32), who thought that their portals were easy to use (OR 7.4, 95% CI 1.12-48.84), and who were frequent internet users (OR 43.72, 95% CI 1.83-1046.43) were more likely to trust their portals. Participants reporting that the portals were important in managing their health (OR 28.13, 95% CI 5.31-148.85) and that their portals were a valuable part of their health care (OR 6.75, 95% CI 1.51-30.11) were also more likely to trust their portals. Conclusions There are several factors that impact the trust of EHR patient portal users in their portals. Designing easily usable portals and considering these factors may be the most effective approach to improving trust in patient portals. The desired features and usability of portals are critical factors that contribute to users’ trust in EHR portals.
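Odds ratios like those above come from exponentiating logistic-regression coefficients, and their Wald confidence intervals from exponentiating the coefficient ± 1.96 standard errors; very wide intervals such as 1.83-1046.43 simply reflect a large standard error on the log-odds scale. A short sketch (the coefficient and standard error below are hypothetical values chosen only to produce an interval of roughly that shape):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical: a log-odds coefficient of 3.78 with SE 1.62 yields an OR
# near 44 with an interval spanning roughly three orders of magnitude.
or_, lo, hi = odds_ratio_ci(3.78, 1.62)
```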


2021 ◽  
Author(s):  
Talia Roshini Lester ◽  
Yair Bannett ◽  
Rebecca M. Gardner ◽  
Heidi M. Feldman ◽  
Lynne C. Huffman

Objectives: To describe medication management of children diagnosed with anxiety and depression by primary care providers (PCPs). Study Design/Methods: We performed a retrospective cross-sectional analysis of electronic health record (EHR) structured data. All visits for pediatric patients seen at least twice during a four-year period within a network of primary care clinics in Northern California were included. Descriptive statistics summarized patient variables and the most commonly prescribed medications. For each subcohort (anxiety only, depression only, and anxiety+depression), logistic regression models examined the variables associated with medication prescription. Results: Of all patients (N=93,025), 2.8% (n=2635) had a diagnosis of anxiety only, 1.5% (n=1433) depression only, and 0.79% (n=737) both anxiety and depression (anxiety+depression); 18% of children with anxiety and/or depression had comorbid ADHD. A total of 14.0% with anxiety (n=370), 20.3% with depression (n=291), and 47.5% with anxiety+depression (n=350) received a psychoactive non-stimulant medication. For anxiety only and depression only, sertraline, citalopram, and fluoxetine were most commonly prescribed. For anxiety+depression, citalopram, sertraline, and escitalopram were most commonly prescribed. The top prescribed medications also included benzodiazepines. Logistic regression models showed that older age and having developmental or mental health comorbidities were independently associated with increased likelihood of medication prescription for children with anxiety, depression, and anxiety+depression. Insurance type and sex were not associated with medication prescription. Conclusions: PCPs prescribe medications more frequently for patients with anxiety+depression than for patients with either diagnosis alone. Medication choices generally align with current recommendations. Future research should focus on the use of benzodiazepines due to safety concerns in children.


Kidney360 ◽  
2020 ◽  
Vol 1 (8) ◽  
pp. 731-739 ◽  
Author(s):  
Kinsuk Chauhan ◽  
Girish N. Nadkarni ◽  
Fergus Fleming ◽  
James McCullough ◽  
Cijiang J. He ◽  
...  

Background Individuals with type 2 diabetes (T2D) or the apolipoprotein L1 high-risk (APOL1-HR) genotypes are at increased risk of rapid kidney function decline (RKFD) and kidney failure. We hypothesized that a prognostic test using machine learning integrating blood biomarkers and longitudinal electronic health record (EHR) data would improve risk stratification. Methods We selected two cohorts from the Mount Sinai BioMe Biobank: T2D (n=871) and African ancestry with APOL1-HR (n=498). We measured plasma tumor necrosis factor receptors (TNFR) 1 and 2 and kidney injury molecule-1 (KIM-1) and used random forest algorithms to integrate biomarker and EHR data to generate a risk score for a composite outcome: RKFD (eGFR decline of ≥5 ml/min per year), 40% sustained eGFR decline, or kidney failure. We compared performance to a validated clinical model and applied thresholds to assess the utility of the prognostic test (KidneyIntelX) to accurately stratify patients into risk categories. Results Overall, 23% of those with T2D and 18% of those with APOL1-HR experienced the composite kidney end point over a median follow-up of 4.6 and 5.9 years, respectively. The area under the receiver operating characteristic curve (AUC) of KidneyIntelX was 0.77 (95% CI, 0.75 to 0.79) in T2D and 0.80 (95% CI, 0.77 to 0.83) in APOL1-HR, outperforming the clinical models (AUC, 0.66 [95% CI, 0.65 to 0.67] and 0.72 [95% CI, 0.71 to 0.73], respectively; P<0.001). The positive predictive values for KidneyIntelX were 62% and 62% versus 46% and 39% for the clinical models (P<0.01) in the high-risk (top 15%) stratum for T2D and APOL1-HR, respectively. The negative predictive values for KidneyIntelX were 92% in T2D and 96% in APOL1-HR versus 85% and 93% for the clinical model (P=0.76 and P=0.93, respectively) in the low-risk (bottom 50%) stratum. Conclusions In patients with T2D or APOL1-HR, a prognostic test (KidneyIntelX) integrating biomarker levels with longitudinal EHR data significantly improved prediction of a composite kidney end point of RKFD, 40% decline in eGFR, or kidney failure over validated clinical models.
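KidneyIntelX itself is proprietary, but the stratum-level metrics quoted above (PPV in the high-risk top-15% stratum, NPV in the low-risk bottom-50% stratum) can be evaluated for any continuous risk score with a short helper. An illustrative sketch, not the vendor's code; scores and outcomes below are toy values:

```python
def stratum_ppv_npv(scores, outcomes, high_cut, low_cut):
    """Threshold a continuous risk score into strata and report
    PPV among the high-risk stratum (score >= high_cut) and
    NPV among the low-risk stratum (score <= low_cut).
    outcomes: 1 if the composite kidney end point occurred, else 0."""
    high = [y for s, y in zip(scores, outcomes) if s >= high_cut]
    low = [y for s, y in zip(scores, outcomes) if s <= low_cut]
    ppv = sum(high) / len(high)      # event rate among high-risk patients
    npv = 1 - sum(low) / len(low)    # event-free rate among low-risk patients
    return ppv, npv
```

In practice `high_cut` and `low_cut` would be chosen as the score percentiles defining the top-15% and bottom-50% strata.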


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Mark Sonderman ◽  
Eric Farber-Eger ◽  
Aaron W Aday ◽  
Matthew S Freiberg ◽  
Joshua A Beckman ◽  
...  

Introduction: Peripheral arterial disease (PAD) is a common and underdiagnosed disease associated with significant morbidity and increased risk of major adverse cardiovascular events. Targeted screening of individuals at high risk for PAD could facilitate early diagnosis and allow for prompt initiation of interventions aimed at reducing cardiovascular and limb events. However, no widely accepted PAD risk stratification tools exist. Hypothesis: We hypothesized that machine learning algorithms can identify patients at high risk for PAD, defined by ankle-brachial index (ABI) <0.9, from electronic health record (EHR) data. Methods: Using data from the Vanderbilt University Medical Center EHR, ABIs were extracted for 8,093 patients not previously diagnosed with PAD at the time of initial testing. A total of 76 patient characteristics, including demographics, vital signs, lab values, diagnoses, and medications, were analyzed using both a random forest and least absolute shrinkage and selection operator (LASSO) regression to identify features most predictive of ABI <0.9. The most significant features were used to build a logistic regression-based predictor that was validated in a separate group of individuals with ABI data. Results: The machine learning models identified several features independently correlated with PAD (age, BMI, SBP, DBP, pulse pressure, anti-hypertensive medication, diabetes medication, smoking, and statin use). The test statistic produced by the logistic regression model was correlated with PAD status in our validation set. At a chosen threshold, the specificity was 0.92 and the positive predictive value was 0.73 in this high-risk population. Conclusions: Machine learning can be applied to build unbiased models that identify individuals at risk for PAD using easily accessible information from the EHR. This model can be implemented either through a high-risk flag within the medical record or an online calculator available to clinicians.
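The "online calculator" proposed in the conclusion is just the fitted logistic model applied to a patient's feature vector: a weighted sum of features passed through the logistic function. A sketch with hypothetical coefficients (the abstract does not publish the fitted weights, so the numbers below are invented for illustration):

```python
import math

# Hypothetical coefficients for illustration only -- the article does not
# report the fitted model, and real features would include all selected
# variables (BMI, pulse pressure, statin use, etc.).
COEFS = {"age": 0.04, "sbp": 0.02, "smoker": 0.8, "diabetes_med": 0.6}
INTERCEPT = -7.0

def pad_risk(features):
    """Logistic-regression risk calculator: linear predictor -> probability
    of ABI < 0.9. features maps COEFS keys to patient values."""
    lp = INTERCEPT + sum(COEFS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-lp))
```

A clinician-facing calculator would then flag patients whose predicted probability exceeds the threshold chosen for the desired specificity.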


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Andrew Bishara ◽  
Catherine Chiu ◽  
Elizabeth L. Whitlock ◽  
Vanja C. Douglas ◽  
Sei Lee ◽  
...  

Abstract Background Accurate, pragmatic risk stratification for postoperative delirium (POD) is necessary to target preventative resources toward high-risk patients. Machine learning (ML) offers a novel approach to leveraging electronic health record (EHR) data for POD prediction. We sought to develop and internally validate a ML-derived POD risk prediction model using preoperative risk features, and to compare its performance to models developed with traditional logistic regression. Methods This was a retrospective analysis of preoperative EHR data from 24,885 adults undergoing a procedure requiring anesthesia care, recovering in the main post-anesthesia care unit, and staying in the hospital at least overnight between December 2016 and December 2019 at either of two hospitals in a tertiary care health system. One hundred fifteen preoperative risk features including demographics, comorbidities, nursing assessments, surgery type, and other preoperative EHR data were used to predict postoperative delirium (POD), defined as any instance of Nursing Delirium Screening Scale ≥2 or positive Confusion Assessment Method for the Intensive Care Unit within the first 7 postoperative days. Two ML models (Neural Network and XGBoost), two traditional logistic regression models (“clinician-guided” and “ML hybrid”), and a previously described delirium risk stratification tool (AWOL-S) were evaluated using the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, positive likelihood ratio, and positive predictive value. Model calibration was assessed with a calibration curve. Patients with no POD assessments charted or at least 20% of input variables missing were excluded. Results POD incidence was 5.3%. The AUC-ROC for Neural Net was 0.841 [95% CI 0.816–0.863] and for XGBoost was 0.851 [95% CI 0.827–0.874], which was significantly better than the clinician-guided (AUC-ROC 0.763 [0.734–0.793], p < 0.001) and ML hybrid (AUC-ROC 0.824 [0.800–0.849], p < 0.001) regression models and AWOL-S (AUC-ROC 0.762 [95% CI 0.713–0.812], p < 0.001). Neural Net, XGBoost, and ML hybrid models demonstrated excellent calibration, while calibration of the clinician-guided and AWOL-S models was moderate; they tended to overestimate delirium risk in those already at highest risk. Conclusion Using pragmatically collected EHR data, two ML models predicted POD in a broad perioperative population with high discrimination. Optimal application of the models would provide automated, real-time delirium risk stratification to improve perioperative management of surgical patients at risk for POD.
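A calibration curve like the one used above compares mean predicted risk to the observed event rate within probability bins; a well-calibrated model tracks the diagonal, while an over-estimating model (like the clinician-guided and AWOL-S tools in the highest-risk patients) sits below it. A minimal sketch of the binning step (illustrative, equal-width bins):

```python
def calibration_bins(probs, labels, n_bins=10):
    """Group predictions into equal-width probability bins and return
    (mean predicted risk, observed event rate, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[idx].append((p, y))
    out = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            observed = sum(y for _, y in b) / len(b)
            out.append((mean_p, observed, len(b)))
    return out
```

Plotting observed rate against mean predicted risk per bin gives the calibration curve; points below the diagonal indicate overestimated risk.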


2011 ◽  
Vol 32 (4) ◽  
pp. 351-359 ◽  
Author(s):  
Maria C. S. Inacio ◽  
Elizabeth W. Paxton ◽  
Yuexin Chen ◽  
Jessica Harris ◽  
Enid Eck ◽  
...  

Objective. To evaluate whether a hybrid electronic screening algorithm using a total joint replacement (TJR) registry, electronic surgical site infection (SSI) screening, and electronic health record (EHR) review of SSI is sensitive and specific for SSI detection and reduces chart review volume for SSI surveillance. Design. Validation study. Setting. A large health maintenance organization (HMO) with 8.6 million members. Methods. Using codes for infection, wound complications, cellulitis, procedures related to infections, and surgeon-reported complications from the International Classification of Diseases, Ninth Revision, Clinical Modification, we screened each TJR procedure performed in our HMO between January 2006 and December 2008 for possible infections. Flagged charts were reviewed by clinical-content experts to confirm SSIs. SSIs identified by the electronic screening algorithm were compared with SSIs identified by the traditional indirect surveillance methodology currently employed in our HMO. Positive predictive values (PPVs), negative predictive values (NPVs), and specificity and sensitivity values were calculated. Absolute reduction of chart review volume was evaluated. Results. The algorithm identified 4,001 possible SSIs (9.5%) for the 42,173 procedures performed for our TJR patient population. A total of 440 case patients (1.04%) had SSIs (PPV, 11.0%; NPV, 100.0%). The sensitivity and specificity of the overall algorithm were 97.8% and 91.5%, respectively. Conclusion. An electronic screening algorithm combined with an electronic health record review of flagged cases can be used as a valid source for TJR SSI surveillance. The algorithm successfully reduced the volume of chart review for surveillance by 90.5%.
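The screening step described above is essentially set membership: flag an encounter for chart review if any of its diagnosis or procedure codes falls in the screening code set. A simplified sketch; the ICD-9-CM prefixes below are illustrative stand-ins for the categories named (postoperative infection, prosthesis-related infection, cellulitis), not the study's validated code set:

```python
# Illustrative ICD-9-CM prefixes only -- the actual screening code set
# used by the algorithm is broader and was validated separately.
FLAG_PREFIXES = ("998.5", "996.66", "682")

def flag_possible_ssi(diagnosis_codes):
    """Screen one TJR encounter: flag it for expert chart review if any
    recorded code starts with a prefix in the screening set."""
    return any(code.startswith(p) for code in diagnosis_codes for p in FLAG_PREFIXES)
```

Applied across all procedures, only flagged encounters go to clinical-content experts, which is how the algorithm cuts chart-review volume.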


2018 ◽  
Vol 57 (05/06) ◽  
pp. 261-269 ◽  
Author(s):  
Ashimiyu Durojaiye ◽  
Lisa Puett ◽  
Scott Levin ◽  
Matthew Toerper ◽  
Nicolette McGeorge ◽  
...  

Background Electronic health record (EHR) systems contain large volumes of novel heterogeneous data that can be linked to trauma registry data to enable innovative research not possible with either data source alone. Objective This article describes an approach for linking electronically extracted EHR data to trauma registry data at the institutional level and assesses the value of probabilistic linkage. Methods Encounter data were independently obtained from the EHR data warehouse (n = 1,632) and the pediatric trauma registry (n = 1,829) at a Level I pediatric trauma center. Deterministic linkage was attempted using nine different combinations of medical record number (MRN), encounter ID (visit ID), age, gender, and emergency department (ED) arrival date. True matches from the best-performing variable combination were used to create a gold standard, which was used to evaluate the performance of each variable combination and to train a probabilistic algorithm that was applied separately to records unmatched by deterministic linkage and to the entire cohort. Additional records that matched probabilistically were investigated via chart review and compared against records that matched deterministically. Results Deterministic linkage with exact matching on any three of MRN, encounter ID, age, gender, and ED arrival date gave the best yield of 1,276 true matches, while an additional probabilistic linkage step following deterministic linkage yielded 110 true matches. These records contained a significantly higher number of boys compared with records that matched deterministically, and the discrepancy was attributable to mismatches between MRNs in the two data sets. Probabilistic linkage of the entire cohort yielded 1,363 true matches. Conclusion The combination of a deterministic and an additional probabilistic method represents a robust approach for linking EHR data to trauma registry data.
This approach may be generalizable to studies involving other registries and databases.
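The best-performing deterministic rule above (exact agreement on any three of the five linkage variables) can be sketched directly; the field names below are illustrative, and a production linker would also normalize values (e.g., date formats) before comparing:

```python
LINK_FIELDS = ("mrn", "encounter_id", "age", "gender", "ed_arrival_date")

def deterministic_match(ehr_rec, registry_rec, min_agree=3):
    """Declare a deterministic link when at least `min_agree` of the five
    linkage fields agree exactly (missing values never count as agreement)."""
    agree = sum(
        ehr_rec.get(f) is not None and ehr_rec.get(f) == registry_rec.get(f)
        for f in LINK_FIELDS
    )
    return agree >= min_agree
```

Records failing this rule would then be passed to the trained probabilistic algorithm, which tolerates discrepancies such as mismatched MRNs.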

