Clinical coding support based on structured data stored in electronic health records

Author(s):  
Jose C. Ferrao ◽  
Monica D. Oliveira ◽  
Filipe Janela ◽  
Henrique M. G. Martins
2017 ◽  
Author(s):  
Brett K Beaulieu-Jones ◽  
Daniel R Lavage ◽  
John W Snyder ◽  
Jason H Moore ◽  
Sarah A Pendergrass ◽  
...  

2021 ◽  
Vol 42 (Supplement_1) ◽  
Author(s):  
C M Maciejewski ◽  
M K Krajsman ◽  
K O Ozieranski ◽  
M B Basza ◽  
M G Gawalko ◽  
...  

Abstract
Background: An estimated 80% of the data gathered in electronic health records is unstructured textual information that cannot be used for research purposes until it is manually coded into a database. Manual coding is both a cost- and time-consuming process. Natural language processing (NLP) techniques may be used to extract structured data from text; however, little is known about the accuracy of data obtained through these methods.
Purpose: To evaluate the feasibility of employing NLP techniques to obtain the risk factors needed for CHA2DS2-VASc score calculation and to detect the antithrombotic medication prescribed in a population of atrial fibrillation (AF) patients on a cardiology ward.
Methods: An automatic tool for disease and drug recognition based on regular-expression rules was designed through cooperation between physicians and IT specialists. Records of 194 AF patients discharged from a cardiology ward were manually reviewed by a physician-annotator as the comparator for the automatic approach.
Results: The median CHA2DS2-VASc score calculated by the automatic method was 3 (IQR 2–4) versus 3 (IQR 2–4) for the manual method (p=0.66). Agreement between the CHA2DS2-VASc scores calculated by the two methods was high (Kendall's W=0.979; p<0.001). In terms of anticoagulant recognition, the automatic tool misclassified the prescribed drug in 4 cases.
Conclusion: NLP-based techniques are a promising tool for obtaining structured data for research purposes from electronic health records in Polish. Close cooperation between physicians and IT specialists is crucial for establishing accurate recognition patterns.
Funding Acknowledgement: Type of funding sources: None.
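The scoring step this abstract describes, regular-expression rules over discharge text combined with structured age and sex, can be sketched as follows. This is a minimal illustration, not the study's tool: the patterns are English placeholders (the actual rules were developed in Polish by physicians and IT specialists), and names such as RISK_FACTOR_PATTERNS and cha2ds2vasc are hypothetical.

```python
import re

# Hypothetical, illustrative regex rules; the study's actual Polish patterns
# are not published here. Negation handling ("no prior stroke") is omitted.
RISK_FACTOR_PATTERNS = {
    "chf":          re.compile(r"heart failure|cardiomyopathy", re.I),
    "hypertension": re.compile(r"hypertension", re.I),
    "diabetes":     re.compile(r"diabetes", re.I),
    "stroke_tia":   re.compile(r"stroke|TIA|transient ischemi", re.I),
    "vascular":     re.compile(r"myocardial infarction|peripheral artery|aortic plaque", re.I),
}

def cha2ds2vasc(text: str, age: int, female: bool) -> int:
    """Compute CHA2DS2-VASc from free-text risk factors plus structured age/sex."""
    found = {name for name, pat in RISK_FACTOR_PATTERNS.items() if pat.search(text)}
    score = 0
    score += 1 if "chf" in found else 0
    score += 1 if "hypertension" in found else 0
    score += 1 if "diabetes" in found else 0
    score += 2 if "stroke_tia" in found else 0
    score += 1 if "vascular" in found else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    return score

# Invented discharge-note fragment for illustration:
note = "Arterial hypertension and type 2 diabetes in history."
print(cha2ds2vasc(note, age=72, female=True))  # 1 + 1 + 1 + 1 = 4
```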


2018 ◽  
Vol 6 (1) ◽  
pp. e11 ◽  
Author(s):  
Brett K Beaulieu-Jones ◽  
Daniel R Lavage ◽  
John W Snyder ◽  
Jason H Moore ◽  
Sarah A Pendergrass ◽  
...  

2008 ◽  
Vol 47 (01) ◽  
pp. 8-13 ◽  
Author(s):  
T. Dostálová ◽  
P. Hanzlíček ◽  
Z. Teuberová ◽  
M. Nagy ◽  
M. Pieš ◽  
...  

Summary
Objectives: To identify support for structured data entry in electronic health record applications in forensic dentistry.
Methods: The methods of structuring information in dentistry are described, and structured data entry in electronic health records for forensic dentistry is validated on several real cases with the interactive DentCross component. The connection of this component to the MUDR and MUDRLite electronic health records is described.
Results: The use of the MUDRLite electronic health record and the interactive DentCross component to collect the dental information required by the standardized Interpol Disaster Victim Identification Form for possible victim identification is shown.
Conclusions: The analysis of structured data entry for dentistry using the DentCross component connected to an electronic health record showed that the component can deliver a real service to dental care and can support the identification of a person in forensic dentistry.
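As a rough illustration of the kind of structured record such an entry component might produce, here is a minimal sketch. The DentalRecord class, its field names, and the status vocabulary are assumptions made for illustration; they do not reproduce the MUDR/MUDRLite schemas or the Interpol DVI form.

```python
from dataclasses import dataclass, field

# Illustrative status vocabulary; a real forensic chart is far richer.
TOOTH_STATUS = {"sound", "missing", "filled", "crown", "implant", "unrecorded"}

@dataclass
class DentalRecord:
    patient_id: str
    # FDI two-digit tooth numbering (11-48) -> recorded status
    teeth: dict[int, str] = field(default_factory=dict)

    def set_tooth(self, fdi_number: int, status: str) -> None:
        if status not in TOOTH_STATUS:
            raise ValueError(f"unknown status: {status}")
        self.teeth[fdi_number] = status

    def to_rows(self) -> list[tuple[int, str]]:
        """Flatten into rows resembling one column of a dental identification chart."""
        return sorted(self.teeth.items())

record = DentalRecord(patient_id="case-001")
record.set_tooth(11, "filled")
record.set_tooth(36, "missing")
print(record.to_rows())  # [(11, 'filled'), (36, 'missing')]
```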


2018 ◽  
Author(s):  
Max Robinson ◽  
Jennifer Hadlock ◽  
Jiyang Yu ◽  
Alireza Khatamian ◽  
Aleksandr Y. Aravkin ◽  
...  

Abstract
We present a locality-sensitive hashing strategy for summarizing semi-structured data (e.g., in JSON or XML formats) into ‘data fingerprints’: highly compressed representations which cannot recreate details in the data, yet simplify and greatly accelerate the comparison and clustering of semi-structured data by preserving similarity relationships. Computation on data fingerprints is fast: in one example involving complex simulated medical records, the average time to encode one record was 0.53 seconds, and the average pairwise comparison time was 3.75 microseconds. Both processes are trivially parallelizable.
Applications include detection of duplicates, clustering and classification of semi-structured data, which support larger goals including summarizing large and complex data sets, quality assessment, and data mining. We illustrate use cases with three analyses of electronic health records (EHRs): (1) pairwise comparison of patient records, (2) analysis of cohort structure, and (3) evaluation of methods for generating simulated patient data.
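A minimal sketch of the fingerprinting idea, not the authors' published algorithm: flatten a JSON record into path=value tokens, then MinHash them into a fixed-length fingerprint whose slot-wise agreement approximates Jaccard similarity. The function names and NUM_HASHES setting below are illustrative assumptions.

```python
import hashlib

NUM_HASHES = 128  # fingerprint length; longer gives a tighter similarity estimate

def tokens(obj, prefix=""):
    """Flatten nested dicts/lists into 'path=value' strings."""
    if isinstance(obj, dict):
        for k, v in obj.items():
            yield from tokens(v, f"{prefix}{k}.")
    elif isinstance(obj, list):
        for v in obj:
            yield from tokens(v, prefix)
    else:
        yield f"{prefix[:-1]}={obj}"

def fingerprint(record: dict) -> list[int]:
    """MinHash signature: per seed, keep the smallest hash over all tokens."""
    toks = list(tokens(record))
    return [
        min(int.from_bytes(hashlib.blake2b(f"{i}|{t}".encode(), digest_size=8).digest(), "big")
            for t in toks)
        for i in range(NUM_HASHES)
    ]

def similarity(a: list[int], b: list[int]) -> float:
    """Estimated Jaccard similarity: fraction of matching MinHash slots."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

r1 = {"patient": {"age": 54, "dx": ["I48", "I10"]}}
r2 = {"patient": {"age": 54, "dx": ["I48", "E11"]}}
# Prints an estimate near the true Jaccard similarity of these records (0.5).
print(round(similarity(fingerprint(r1), fingerprint(r2)), 2))
```

Because comparison only walks two short integer lists, pairwise similarity is cheap regardless of how large or deeply nested the original records were, which is the property the abstract highlights.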


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Patty Noah ◽  
Chris Hackett ◽  
Leslie Pope ◽  
Kelly Buchinsky ◽  
Ashis H Tayal ◽  
...  

Introduction: Comprehensive Stroke Centers (CSC) provide a high level of care and are held to rigorous standards. Meeting these standards can be challenging in a tertiary care center where multiple services admit the stroke patient population, and nonuniform documentation across services both hinders compliance and delays data abstraction. The Cerebrovascular Navigator (CV Navigator) is a structured data entry form developed within the electronic health record (EHR) platform to document clinically relevant data, with the aim of increasing compliance with CSTK measures and decreasing abstraction time.
Method: The CV Navigator was developed by a team of physicians, nurses, abstractors, and informatics personnel, focusing specifically on the CSC measures defined by The Joint Commission. The CV Navigator captured the data required for abstraction and submission into the Get With The Guidelines (GWTG) database. We compared compliance for the two years before (2015-2016) and after (2017-2018) implementation of the CV Navigator at our comprehensive stroke center.
Results: During the study, 5869 acute stroke patients were admitted (2842 before, 3027 after). The CV Navigator simplified data abstraction, decreasing the time to GWTG submission from three months to two weeks post discharge. Compliance with documenting a severity measurement (CSTK-3) increased significantly, from 66.7% before (n=375/562) to 87.6% after implementation of the CV Navigator (n=496/566), OR=3.53 (95% CI 2.60-4.80), p<0.001. Compliance with documenting why a procoagulant reversal agent was not given (CSTK-4) increased from 81.2% (n=39/48) to 94.1% (n=48/51), but the difference was not statistically significant, OR=3.69 (95% CI 0.94-14.58), p=0.07.
Conclusion: Use of structured data entry improved compliance with documentation of stroke standards and markedly reduced abstraction times.
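The reported odds ratios follow from the compliance counts by a standard 2x2 calculation with a Wald confidence interval; the short sketch below uses only the counts given in the abstract, and the function name is illustrative.

```python
from math import log, exp, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a = compliant after, b = non-compliant after,
    c = compliant before, d = non-compliant before."""
    or_ = (a * d) / (b * c)                    # odds after / odds before
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# CSTK-3 severity documentation, counts from the abstract:
# after: 496/566 compliant; before: 375/562 compliant
print(odds_ratio_ci(496, 566 - 496, 375, 562 - 375))
# -> approximately (3.53, 2.60, 4.80), matching the reported OR and 95% CI
```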

