Measuring Adoption of Patient Priorities-Aligned Care Using Natural Language Processing

2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 183-183
Author(s):  
Javad Razjouyan ◽  
Jennifer Freytag ◽  
Edward Odom ◽  
Lilian Dindo ◽  
Aanand Naik

Abstract Patient Priorities Care (PPC) is a model of care that aligns health care recommendations with the priorities of older adults with multiple chronic conditions. Social workers (SWs), after online training, document PPC in the patient’s electronic health record (EHR). Our goal is to identify free-text notes with PPC language using a natural language processing (NLP) model and to measure PPC adoption and its effect on long-term services and supports (LTSS) use. Free-text notes from the EHR produced by trained SWs were passed through a hybrid NLP model that combined rule-based and statistical machine learning. NLP accuracy was validated against chart review. Patients who received PPC were propensity matched with patients not receiving PPC (control) on age, gender, BMI, Charlson comorbidity index, facility, and SW. The change in LTSS utilization over 6-month intervals was compared between groups with univariate analysis. Chart review indicated that 491 notes out of 689 had PPC language, and the NLP model reached a precision of 0.85, a recall of 0.90, an F1 of 0.87, and an accuracy of 0.91. Within-group analysis showed that the intervention group used LTSS 1.8 times more in the 6 months after the encounter than in the 6 months prior. Between-group analysis showed that the intervention group had significantly higher LTSS utilization (p=0.012). An automated NLP model can be used to reliably measure the adoption of PPC by SWs. PPC appears to encourage use of LTSS, which may delay long-term care placement.
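A minimal sketch of the hybrid rule-based plus statistical approach described in this abstract, assuming simple keyword cues and a bag-of-words classifier; the cue phrases, features, and decision rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: hybrid rule-based + statistical detection of PPC language.
# Terms, data, and the OR-combination rule are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

PPC_TERMS = ["patient priorities", "values", "outcome goals", "care preferences"]  # assumed cue phrases

def rule_flag(note: str) -> bool:
    """Rule component: fires when any PPC cue phrase appears in the note."""
    text = note.lower()
    return any(term in text for term in PPC_TERMS)

def train_and_evaluate(train_notes, train_labels, test_notes, test_labels):
    """notes: free-text SW notes; labels: 1 = PPC language per chart review."""
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_notes), train_labels)

    # Hybrid decision: a statistical prediction OR a rule hit counts as PPC-positive.
    stat_pred = clf.predict(vec.transform(test_notes))
    hybrid_pred = [int(p or rule_flag(n)) for p, n in zip(stat_pred, test_notes)]

    return {
        "precision": precision_score(test_labels, hybrid_pred),
        "recall": recall_score(test_labels, hybrid_pred),
        "f1": f1_score(test_labels, hybrid_pred),
        "accuracy": accuracy_score(test_labels, hybrid_pred),
    }
```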

2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S72-S72
Author(s):  
Brian R Lee ◽  
Alaina Linafelter ◽  
Alaina Burns ◽  
Allison Burris ◽  
Heather Jones ◽  
...  

Abstract Background Acute pharyngitis is one of the most common causes of pediatric health care visits, accounting for approximately 12 million ambulatory care visits each year. Rapid antigen detection tests (RADTs) for Group A Streptococcus (GAS) are among the most commonly ordered tests in ambulatory settings. Approximately 40–60% of RADTs are estimated to be inappropriate. Determining whether an RADT was inappropriate frequently requires time-intensive chart review. The purpose of this study was to determine whether natural language processing (NLP) can provide an accurate and automated alternative for assessing RADT inappropriateness. Methods Patients ≥ 3 years of age who received an RADT while evaluated in our EDs/UCCs between April 2018 and September 2018 were identified. A manual chart review was completed on a 10% random sample to determine the presence of sore throat or viral symptoms (i.e., conjunctivitis, rhinorrhea, cough, diarrhea, hoarse voice, and viral exanthema). An inappropriate RADT was defined as either absence of sore throat or report of 2 or more viral symptoms. An NLP algorithm was developed independently to assign the presence/absence of symptoms and RADT inappropriateness. NLP sensitivity/specificity was calculated using the manual chart review sample as the gold standard. Results Manual chart review was completed on 720 patients, of whom 320 (44.4%) were considered to have an inappropriate RADT. When compared with the manual review, the NLP approach showed high sensitivity (se) and specificity (sp) when assigning inappropriateness (88.4% and 90.0%, respectively). Strong sensitivity/specificity was also observed for select symptoms, including sore throat (se: 92.9%, sp: 92.5%), cough (se: 94.5%, sp: 96.5%), and rhinorrhea (se: 86.1%, sp: 95.3%). The prevalence of clinical symptoms was similar when running NLP on subsequent, independent validation sets. After validating the NLP algorithm, a long-term monthly trend report was developed (Figure: Inappropriate GAS RADTs Determined by NLP, June 2018–May 2020). Conclusion An NLP algorithm can accurately identify inappropriate RADTs when compared with a gold standard. Manual chart review requires dozens of hours to complete; in contrast, NLP requires only a couple of minutes and offers the potential to calculate valid metrics that are easily scaled up to help monitor comprehensive, long-term trends. Disclosures Brian R. Lee, MPH, PhD, Merck (Grant/Research Support)
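The inappropriateness rule in this abstract (no sore throat, or two or more viral symptoms) and its validation against chart review lend themselves to a short sketch; the naive keyword-based symptom extraction below is an assumption standing in for the study's NLP algorithm.

```python
# Hypothetical sketch of the RADT inappropriateness rule and sensitivity/specificity
# calculation against manual chart review; keyword matching is illustrative only.
VIRAL_SYMPTOMS = ["conjunctivitis", "rhinorrhea", "cough", "diarrhea",
                  "hoarse voice", "viral exanthema"]

def extract_symptoms(note: str) -> dict:
    text = note.lower()
    findings = {s: s in text for s in VIRAL_SYMPTOMS}
    findings["sore throat"] = "sore throat" in text
    return findings

def radt_inappropriate(findings: dict) -> bool:
    """Inappropriate RADT: no sore throat, or two or more viral symptoms."""
    viral_count = sum(findings[s] for s in VIRAL_SYMPTOMS)
    return (not findings["sore throat"]) or viral_count >= 2

def sensitivity_specificity(nlp_flags, chart_flags):
    """Compare NLP-assigned flags to the chart-review gold standard."""
    tp = sum(n and c for n, c in zip(nlp_flags, chart_flags))
    tn = sum((not n) and (not c) for n, c in zip(nlp_flags, chart_flags))
    fp = sum(n and (not c) for n, c in zip(nlp_flags, chart_flags))
    fn = sum((not n) and c for n, c in zip(nlp_flags, chart_flags))
    return tp / (tp + fn), tn / (tn + fp)
```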


Author(s):  
Liat Ayalon ◽  
Sagit Lev ◽  
Gil Lev

Abstract Objectives We thematically classified all titles of eight top psychological and social gerontology journals over a period of six decades, between 1961 and February 2020. This was done to provide a broad overview of the main topics that have interested the scientific community over time and place. Method We used natural language processing to analyze the data. To capture the diverse thematic clusters covered by the journals, a cluster analysis based on “topic detection” was conducted. Results A total of 15,566 titles were classified into 38 thematic clusters. These clusters were then compared over time and geographic location. The majority of titles fell into a relatively small number of thematic clusters, and a large number of thematic clusters were hardly addressed. The most frequently addressed thematic clusters were (a) Cognitive functioning, (b) Long-term care and formal care, (c) Emotional and personality functioning, (d) Health, and (e) Family and informal care. The least frequently addressed thematic clusters were (a) Volunteering, (b) Sleep, (c) Addictions, (d) Suicide, and (e) Nutrition. There was limited variability over time and place with regard to the most frequently addressed themes. Discussion Despite our focus on journals that specifically address psychological and social aspects of gerontology, the biomedicalization of the field is evident. The somewhat limited variability of themes over time and place is disconcerting, as it potentially attests to slow progress and limited attention to contextual/societal variations.
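An illustrative sketch of clustering article titles into themes with TF-IDF and k-means; the study's "topic detection" method is not specified in the abstract, so the algorithm choice and the carried-over cluster count of 38 are assumptions.

```python
# Illustrative title-clustering sketch, not a reproduction of the study's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_titles(titles, n_clusters=38):
    vec = TfidfVectorizer(stop_words="english", min_df=2)
    X = vec.fit_transform(titles)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    # Describe each cluster by its highest-weight terms to aid manual naming
    # (e.g., "cognitive functioning", "long-term care").
    terms = vec.get_feature_names_out()
    top_terms = {}
    for c in range(n_clusters):
        center = km.cluster_centers_[c]
        top = center.argsort()[::-1][:5]
        top_terms[c] = [terms[i] for i in top]
    return km.labels_, top_terms
```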


2020 ◽  
Author(s):  
Javad Razjouyan ◽  
Jennifer Freytag ◽  
Lilian Dindo ◽  
Lea Kiefer ◽  
Edward Odom ◽  
...  

BACKGROUND Patient Priorities Care (PPC) is a model of care that aligns health care recommendations with the priorities of older adults with multiple chronic conditions. Following identification of patient priorities, this information is documented in the patient’s electronic health record (EHR). OBJECTIVE Our goal is to develop and validate a natural language processing (NLP) model that reliably detects when clinicians document patient priorities (i.e., values, outcome goals, and care preferences) within the EHR, as a measure of PPC adoption. METHODS Design: Retrospective analysis of unstructured EHR free-text notes using an NLP model. Setting: National Veterans Health Administration (VHA) EHR. Participants: 778 patient notes of 658 patients from encounters with 144 social workers in the primary care setting. Measurements: Each patient’s free-text clinical note was reviewed by two independent reviewers for the presence of PPC language such as priorities, values, and goals. We developed an NLP model that utilized statistical machine learning approaches. The performance of the NLP model in training and validation with 10-fold cross-validation is reported via accuracy, recall, and precision in comparison to the chart review. RESULTS Out of 778 notes, 589 (76%) were identified as containing PPC language (Kappa = 0.82, p-value < 0.001). The NLP model in the training stage had an accuracy of 0.98 (0.98, 0.99), a recall of 0.98 (0.98, 0.99), and a precision of 0.98 (0.97, 1.00). The NLP model in the validation stage had an accuracy of 0.92 (0.90, 0.94), a recall of 0.84 (0.79, 0.89), and a precision of 0.84 (0.77, 0.91). In contrast, an approach using simple search terms for PPC had a precision of only 0.757. CONCLUSIONS An automated NLP model can reliably measure, with high precision, recall, and accuracy, when clinicians document patient priorities, a key step in the adoption of PPC.
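A minimal sketch of the 10-fold cross-validated evaluation described in the METHODS, assuming a TF-IDF plus logistic regression pipeline; the feature set and classifier are placeholders, since the abstract only says "statistical machine learning approaches".

```python
# Sketch of 10-fold cross-validation reporting accuracy, recall, and precision
# against chart-review labels; pipeline components are assumed, not the study's.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

def evaluate_ppc_detector(notes, labels):
    pipeline = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    scores = cross_validate(
        pipeline, notes, labels, cv=10,
        scoring=["accuracy", "recall", "precision"],
    )
    return {m: scores[f"test_{m}"].mean()
            for m in ("accuracy", "recall", "precision")}
```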


Author(s):  
Mario Jojoa Acosta ◽  
Gema Castillo-Sánchez ◽  
Begonya Garcia-Zapirain ◽  
Isabel de la Torre Díez ◽  
Manuel Franco-Martín

The use of artificial intelligence in health care has grown quickly. In this context, we present our work on applying Natural Language Processing techniques as a tool to analyze the sentiment perceptions of users who answered two questions from the CSQ-8 questionnaire in raw Spanish free text. Their responses are related to mindfulness, which is a novel technique used to control stress and anxiety caused by different factors in daily life. We proposed an online course in which this method was applied in order to improve the quality of life of health care professionals during the COVID-19 pandemic. We also carried out an evaluation of the satisfaction level of the participants involved, with a view to establishing strategies to improve future experiences. To automate this task, we used Natural Language Processing (NLP) models such as Swivel embeddings, neural networks, and transfer learning to classify the inputs into three categories: negative, neutral, and positive. Due to the limited amount of data available (86 records for the first question and 68 for the second), transfer learning techniques were required. The length of the text had no limit from the user’s standpoint, and our approach attained a maximum accuracy of 93.02% and 90.53%, respectively, based on ground truth labeled by three experts. Finally, we proposed a complementary analysis, using a graphical text representation based on word frequency, to help researchers identify relevant information about the opinions with an objective approach to sentiment. The main conclusion drawn from this work is that applying NLP techniques with transfer learning to small amounts of data can achieve sufficient accuracyy in the sentiment analysis and text classification stages.
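A hedged sketch of the transfer-learning setup described above: a small Keras classifier on top of a pretrained Swivel text-embedding module from TensorFlow Hub. The module handle shown is an English-language example and the layer sizes are assumptions; the study's Spanish-language embedding and architecture may differ.

```python
# Assumed transfer-learning sketch for 3-class sentiment (negative/neutral/positive);
# the TF Hub handle and layer sizes are illustrative, not the authors' choices.
import tensorflow as tf
import tensorflow_hub as hub

def build_sentiment_model(num_classes=3):
    embed = hub.KerasLayer(
        "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1",
        input_shape=[], dtype=tf.string, trainable=True)  # fine-tune the embedding on the small dataset
    model = tf.keras.Sequential([
        embed,
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```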


2021 ◽  
Vol 28 (1) ◽  
pp. e100262
Author(s):  
Mustafa Khanbhai ◽  
Patrick Anyadi ◽  
Joshua Symons ◽  
Kelsey Flott ◽  
Ara Darzi ◽  
...  

Objectives Unstructured free-text patient feedback contains rich information, but analysing these data manually would require substantial personnel resources that are not available in most healthcare organisations. Our objective was to undertake a systematic review of the literature on the use of natural language processing (NLP) and machine learning (ML) to process and analyse free-text patient experience data. Methods Databases were systematically searched to identify articles published between January 2000 and December 2019 examining NLP to analyse free-text patient feedback. Due to the heterogeneous nature of the studies, a narrative synthesis was deemed most appropriate. Data related to the study purpose, corpus, methodology, performance metrics and indicators of quality were recorded. Results Nineteen articles were included. The majority (80%) of studies applied language analysis techniques to patient feedback from social media sites (unsolicited), followed by structured surveys (solicited). Supervised learning was most frequently used (n=9), followed by unsupervised (n=6) and semisupervised (n=3) approaches. Comments extracted from social media were analysed using an unsupervised approach, and free-text comments held within structured surveys were analysed using a supervised approach. Reported performance metrics included precision, recall and F-measure, with support vector machine and Naïve Bayes being the best-performing ML classifiers. Conclusion NLP and ML have emerged as important tools for processing unstructured free text. Both supervised and unsupervised approaches have their role depending on the data source. With the advancement of data analysis tools, these techniques may be useful to healthcare organisations to generate insight from the volumes of unstructured free-text data.
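An illustrative comparison of the two classifiers the review found best performing (support vector machine and Naïve Bayes) on free-text feedback; the data, features, and scoring are placeholders, not drawn from any reviewed study.

```python
# Sketch: compare SVM and Naive Bayes on labelled free-text patient feedback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

def compare_classifiers(comments, labels):
    X = TfidfVectorizer(stop_words="english").fit_transform(comments)
    results = {}
    for name, clf in [("svm", LinearSVC()), ("naive_bayes", MultinomialNB())]:
        results[name] = cross_val_score(clf, X, labels, cv=5, scoring="f1_macro").mean()
    return results
```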


2021 ◽  
pp. 1063293X2098297
Author(s):  
Ivar Örn Arnarsson ◽  
Otto Frost ◽  
Emil Gustavsson ◽  
Mats Jirstrand ◽  
Johan Malmqvist

Product development companies collect data in the form of Engineering Change Requests for logged design issues, tests, and product iterations. These documents are rich in unstructured data (e.g., free text). Previous research affirms that product developers find that current IT systems lack capabilities to accurately retrieve relevant documents containing unstructured data. In this research, we demonstrate a method using Natural Language Processing and document clustering algorithms to find structurally or contextually related documents in databases containing Engineering Change Request documents. The aim is to radically decrease the time needed to search effectively for related engineering documents, organize search results, and create labeled clusters from these documents by utilizing Natural Language Processing algorithms. A domain knowledge expert at the case company evaluated the results and confirmed that the algorithms we applied found relevant document clusters for the queries tested.
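A hypothetical sketch of retrieving related Engineering Change Request documents for a free-text query via TF-IDF cosine similarity; the class, its interface, and the ranking scheme are illustrative assumptions, not the paper's implementation.

```python
# Illustrative ECR retrieval index: rank documents by cosine similarity to a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class EcrIndex:
    def __init__(self, documents):
        self.documents = documents
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.matrix = self.vectorizer.fit_transform(documents)

    def related(self, query, top_k=10):
        """Return the top_k documents most similar to the query text."""
        q = self.vectorizer.transform([query])
        scores = cosine_similarity(q, self.matrix).ravel()
        ranked = scores.argsort()[::-1][:top_k]
        return [(self.documents[i], float(scores[i])) for i in ranked]
```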


2021 ◽  
Vol 27 ◽  
pp. 107602962110131
Author(s):  
Bela Woller ◽  
Austin Daw ◽  
Valerie Aston ◽  
Jim Lloyd ◽  
Greg Snow ◽  
...  

Real-time identification of venous thromboembolism (VTE), defined as deep vein thrombosis (DVT) and pulmonary embolism (PE), can inform a healthcare organization’s understanding of these events and be used to improve care. In a former publication, we reported the performance of an electronic medical record (EMR) interrogation tool that employs natural language processing (NLP) of imaging studies for the diagnosis of venous thromboembolism. Because we transitioned from the legacy electronic medical record to the Cerner product, iCentra, we now report the operating characteristics of the NLP EMR interrogation tool in the new EMR environment. Two hundred randomly selected patient encounters in which the imaging report assessed by NLP indicated that VTE was present were reviewed. These included one hundred imaging studies in which PE was identified, comprising computed tomography pulmonary angiography (CTPA), ventilation-perfusion (V/Q) scan, and CT angiography of the chest/abdomen/pelvis. One hundred randomly selected comprehensive ultrasound (CUS) studies that identified DVT were also obtained. For comparison, one hundred patient encounters in which PE was suspected and imaging was negative for PE (CTPA or V/Q), and 100 cases of suspected DVT with negative CUS as reported by NLP, were also selected. Manual chart review of the 400 charts was performed, and we report the sensitivity, specificity, and positive and negative predictive values of NLP compared with manual chart review. NLP and manual review agreed on the presence of PE in 99 of 100 cases, the presence of DVT in 96 of 100 cases, the absence of PE in 99 of 100 cases, and the absence of DVT in all 100 cases. When compared with manual chart review, NLP interrogation of CUS, CTPA, CT angiography of the chest, and V/Q scan yielded a sensitivity of 93.3%, specificity of 99.6%, positive predictive value of 97.1%, and negative predictive value of 99%.
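The operating characteristics reported above come from a 2x2 comparison of NLP output against manual chart review; a worked sketch of that arithmetic follows, with clearly arbitrary placeholder counts rather than the study's confusion matrix.

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix.
def operating_characteristics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Example with arbitrary placeholder counts (not the study's data):
print(operating_characteristics(tp=190, fp=6, fn=14, tn=190))
```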


2015 ◽  
Vol 54 (04) ◽  
pp. 338-345 ◽  
Author(s):  
A. Fong ◽  
R. Ratwani

Summary Objective: Patient safety event data repositories have the potential to dramatically improve safety if analyzed and leveraged appropriately. These safety event reports often consist of both structured data, such as general event type categories, and unstructured data, such as free-text descriptions of the event. Analyzing these data, particularly the rich free-text narratives, can be challenging, especially with tens of thousands of reports. To overcome the resource-intensive manual review process of the free-text descriptions, we demonstrate the effectiveness of using an unsupervised natural language processing approach. Methods: An unsupervised natural language processing technique, called topic modeling, was applied to a large repository of patient safety event data to identify topics, or themes, from the free-text descriptions of the data. Entropy measures were used to evaluate and compare these topics to the general event type categories that were originally assigned by the event reporter. Results: Measures of entropy demonstrated that some topics generated from the unsupervised modeling approach aligned with the clinical general event type categories that were originally selected by the individual entering the report. Importantly, several new latent topics emerged that were not originally identified. The new topics provide additional insights into the patient safety event data that would not otherwise easily be detected. Conclusion: The topic modeling approach provides a method to identify topics or themes that may not be immediately apparent and has the potential to allow for automatic reclassification of events that are ambiguously classified by the event reporter.
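A minimal sketch of the approach this summary describes: LDA topic modeling of free-text event descriptions, followed by an entropy measure of how concentrated each topic is across the reporter-assigned event categories. The specific topic-modeling variant, parameter choices, and alignment metric are assumptions, not those of the study.

```python
# Sketch: topic modeling plus per-topic entropy over reporter-assigned categories.
import numpy as np
from scipy.stats import entropy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_category_entropy(descriptions, categories, n_topics=20):
    vec = CountVectorizer(stop_words="english", min_df=5)
    counts = vec.fit_transform(descriptions)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)   # per-document topic weights
    dominant = doc_topics.argmax(axis=1)     # dominant topic per report

    cats = sorted(set(categories))
    results = {}
    for t in range(n_topics):
        # Distribution of reporter-assigned categories within this topic.
        mask = dominant == t
        dist = np.array([sum(1 for m, c in zip(mask, categories) if m and c == k)
                         for k in cats], dtype=float)
        if dist.sum() > 0:
            # Low entropy -> the topic aligns closely with one event type category.
            results[t] = entropy(dist / dist.sum())
    return results
```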


2017 ◽  
Vol 9 (1) ◽  
Author(s):  
Dino P. Rumoro ◽  
Shital C. Shah ◽  
Gillian S. Gibbs ◽  
Marilyn M. Hallock ◽  
Gordon M. Trenholme ◽  
...  

Objective To explain the utility of using an automated syndromic surveillance program with advanced natural language processing (NLP) to improve clinical quality measures reporting for influenza immunization. Introduction Clinical quality measures (CQMs) are tools that help measure and track the quality of health care services. Measuring and reporting CQMs helps to ensure that our health care system is delivering effective, safe, efficient, patient-centered, equitable, and timely care. The CQM for influenza immunization measures the percentage of patients aged 6 months and older seen for a visit between October 1 and March 31 who received (or reported previous receipt of) an influenza immunization. The Centers for Disease Control and Prevention recommends that everyone 6 months of age and older receive an influenza immunization every season, which can reduce influenza-related morbidity, mortality, and hospitalizations. Methods Patients at a large academic medical center who had a visit to an affiliated outpatient clinic during June 1-8, 2016 were initially identified using their electronic medical record (EMR). The 2,543 patients who were selected did not have documentation of influenza immunization in a discrete field of the EMR. All free-text notes for these patients between August 1, 2015 and March 31, 2016 were retrieved and analyzed using the sophisticated NLP built within Geographic Utilization of Artificial Intelligence in Real-Time for Disease Identification and Alert Notification (GUARDIAN), a syndromic surveillance program, to identify any mention of influenza immunization. The goal was to identify additional cases that met the CQM measure for influenza immunization and to distinguish documented exceptions. The patients with influenza immunization mentioned were further categorized by the GUARDIAN NLP into Received, Recommended, Refused, Allergic, and Unavailable. If more than one category was applicable for a patient, they were independently counted in their respective categories. A descriptive analysis was conducted, along with manual review of a sample of cases in each category. Results For the 2,543 patients who did not have influenza immunization documentation in a discrete field of the EMR, a total of 78,642 free-text notes were processed using GUARDIAN. Four hundred fifty-three (17.8%) patients had some mention of influenza immunization within the notes, which could potentially be utilized to meet the CQM influenza immunization requirement. Twenty-two percent (n=101) of patients mentioned already having received the immunization, while 34.7% (n=157) refused it during the study time frame. There were 27 patients with a mention of influenza immunization who could not be differentiated into a specific category. The number of patients placed into a single category of influenza immunization was 351 (77.5%), while 75 (16.6%) were classified into more than one category. See Table 1. Conclusions Using GUARDIAN’s NLP can identify additional patients who may meet the CQM measure for influenza immunization or who may be exempt. This tool can be used to improve CQM reporting and improve overall influenza immunization coverage by using it to alert providers. Next steps involve further refinement of influenza immunization categories, automating the process of using the NLP to identify and report additional cases, and using the NLP for other CQMs. Table 1. Categorization of influenza immunization documentation within free-text notes of 453 patients using NLP.
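A hedged sketch of categorizing immunization mentions into the five classes named above (Received, Recommended, Refused, Allergic, Unavailable) using simple cue phrases. GUARDIAN's actual NLP is proprietary and far more sophisticated; the patterns below are illustrative assumptions only.

```python
# Illustrative cue-phrase categorization of influenza immunization mentions.
import re

CATEGORY_CUES = {
    "Received": [r"received (the )?flu (shot|vaccine)", r"influenza (vaccine|immunization) given"],
    "Recommended": [r"recommended (the )?flu (shot|vaccine)", r"advised .*influenza vaccination"],
    "Refused": [r"(declined|refused) (the )?flu (shot|vaccine)"],
    "Allergic": [r"allerg\w+ to (the )?(flu|influenza) vaccine"],
    "Unavailable": [r"(flu|influenza) vaccine (not available|unavailable|out of stock)"],
}

def categorize_note(note: str) -> set:
    """Return every category whose cue pattern matches; a note may hit more than one."""
    text = note.lower()
    return {cat for cat, patterns in CATEGORY_CUES.items()
            if any(re.search(p, text) for p in patterns)}
```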

