QUALITATIVE ANALYSIS COMPARED WITH NATURAL LANGUAGE PROCESSING OF A PATIENT FORUM FOR IDENTIFYING PATIENT CENTERED OUTCOMES IN SLEEP APNEA

SLEEP ◽  
2017 ◽  
Vol 40 (suppl_1) ◽  
pp. A443-A444
Author(s):  
Z Harrington ◽  
JP Bakker ◽  
A Wright ◽  
S Baker-Goodwin ◽  
K Page ◽  
...  


2021 ◽  
Author(s):  
Anahita Davoudi ◽  
Hegler Tissot ◽  
Abigail Doucette ◽  
Peter E Gabriel ◽  
Ravi B. Parikh ◽  
...  

One core measure of healthcare quality set forth by the Institute of Medicine is whether care decisions match patient goals. High-quality "serious illness communication" about patient goals and prognosis is required to support patient-centered decision-making; however, current methods are not sensitive enough to measure the quality of this communication or to determine whether the care delivered matches patient priorities. Natural language processing offers an efficient method for identifying and evaluating documented serious illness communication, which could serve as the basis for future quality metrics in oncology and other forms of serious illness. In this study, we trained NLP algorithms to identify and characterize serious illness communication with oncology patients.
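The abstract does not describe the trained algorithms, but the identification step it names can be illustrated with a minimal rule-based sketch. The phrase lexicon below is hypothetical (the study's actual features are not given here); a trained classifier would replace this lookup.

```python
import re

# Hypothetical phrase lexicon for serious illness communication;
# the study's actual model and features are not public in this abstract.
GOC_PATTERNS = [
    r"goals of care",
    r"code status",
    r"prognosis\s+(?:was\s+|were\s+)?discussed",
    r"advance directive",
    r"comfort[- ]focused care",
]

def flags_serious_illness_communication(note: str) -> bool:
    """Return True if any lexicon pattern appears in the note text."""
    text = note.lower()
    return any(re.search(p, text) for p in GOC_PATTERNS)

notes = [
    "Goals of care discussed with patient and family; DNR confirmed.",
    "Continue metformin 500 mg BID; follow up in 3 months.",
]
print([flags_serious_illness_communication(n) for n in notes])  # [True, False]
```

A rule-based pass like this is often used to generate candidate notes for annotation before training a statistical model.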


2019 ◽  
Vol 18 ◽  
pp. 160940691988702 ◽  
Author(s):  
William Leeson ◽  
Adam Resnick ◽  
Daniel Alexander ◽  
John Rovers

Qualitative data-analysis methods provide thick, rich descriptions of subjects’ thoughts, feelings, and lived experiences but may be time-consuming, labor-intensive, or prone to bias. Natural language processing (NLP) is a machine learning technique from computer science that uses algorithms to analyze textual data. NLP allows processing of large amounts of data almost instantaneously. As researchers become conversant with NLP, it is being employed more frequently outside of computer science and shows promise as a tool to analyze qualitative data in public health. This is a proof-of-concept paper to evaluate the potential of NLP to analyze qualitative data. Specifically, we ask whether NLP can support conventional qualitative analysis and, if so, what its role is. We compared a qualitative method of open coding with two forms of NLP, topic modeling and Word2Vec, to analyze transcripts from interviews conducted in rural Belize querying men about their health needs. All three methods returned a series of terms that captured ideas and concepts in subjects’ responses to interview questions. Open coding returned 5–10 words or short phrases for each question. Topic modeling returned a series of word-probability pairs that quantified how well a word captured the topic of a response. Word2Vec returned a list of words for each interview question, ordered by which words were predicted to best capture the meaning of the passage. For most interview questions, all three methods returned conceptually similar results. NLP may be a useful adjunct to qualitative analysis. NLP may be performed after data have undergone open coding as a check on the accuracy of the codes. Alternatively, researchers can perform NLP prior to open coding and use the results to guide the creation of their codebook.
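The comparison workflow described above can be sketched in miniature. The sketch below uses a crude word-frequency count as a stand-in for the topic model (the paper used proper topic modeling and Word2Vec), and the responses and manual codes are illustrative, not the Belize data.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "i", "to", "and", "of", "is", "we", "in",
             "my", "for", "our", "too", "would", "most"}

def top_terms(responses, k=5):
    """Crude frequency-based stand-in for a topic model: the k most
    common content words across all responses to one question."""
    words = []
    for r in responses:
        words += [w for w in re.findall(r"[a-z']+", r.lower())
                  if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

# Illustrative responses to one interview question (not the study data).
responses = [
    "We need a clinic closer to the village, the clinic is too far.",
    "Clean water and a clinic would help our health most.",
]
auto_terms = top_terms(responses)
open_codes = {"clinic access", "clean water", "distance"}  # hypothetical manual codes

# Conceptual overlap: manual codes that share a word with the NLP terms.
overlap = {c for c in open_codes if any(w in c for w in auto_terms)}
print(auto_terms, overlap)
```

In practice the overlap step is done by human judgment of conceptual similarity, as in the paper, rather than by string matching.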


2018 ◽  
Vol 99 (5) ◽  
pp. 253-258 ◽  
Author(s):  
S. P. Morozov ◽  
A. V. Vladzimirskiy ◽  
V. A. Gombolevskiy ◽  
E. S. Kuz’mina ◽  
N. V. Ledikhova

Objective. To assess the value of a natural language processing (NLP) system for quality assurance of radiological reports. Material and methods. A multilateral analysis of chest low-dose computed tomography (LDCT) reports was performed using a commercially available cognitive NLP system. The applicability of artificial intelligence was evaluated for identifying discrepancies between the report body and conclusion (quantitative analysis) and for assessing radiologist adherence to the Lung-RADS guidelines (qualitative analysis). Results. Quantitative analysis: in 8.3% of cases, LDCT reports contained discrepancies between the text body and the conclusion, i.e., a lung nodule described only in the body or only in the conclusion. This carries potential risks and should be taken into account when performing a radiological study audit. Qualitative analysis: the recommended principles of patient management were followed in 46% of Lung-RADS 3 cases, 42% of Lung-RADS 4A cases, and 49% of Lung-RADS 4B cases. Conclusion. The consistency of the NLP system within the framework of the radiological study audit was 95–96%. The system is applicable for radiological study audit, i.e., large-scale automated analysis of radiological reports and other medical documents.
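The body-versus-conclusion discrepancy check described in the quantitative analysis can be sketched as a simple consistency rule. This is a toy illustration, not the commercial system's logic, and it uses a bare keyword match where a real system would use concept extraction.

```python
def mentions_nodule(text: str) -> bool:
    """Toy stand-in for concept extraction: keyword match only."""
    return "nodule" in text.lower()

def body_conclusion_discrepancy(body: str, conclusion: str) -> bool:
    """Flag reports where a lung nodule appears in the body but not
    the conclusion, or vice versa."""
    return mentions_nodule(body) != mentions_nodule(conclusion)

# Illustrative report pairs (body, conclusion), not real LDCT data.
reports = [
    ("6 mm solid nodule in the right upper lobe.", "No significant findings."),
    ("Lungs are clear.", "No acute abnormality."),
]
flags = [body_conclusion_discrepancy(b, c) for b, c in reports]
print(flags, sum(flags) / len(flags))  # [True, False] 0.5
```

Scaling this rule over an archive of reports yields the discrepancy rate the study reports (8.3% in their corpus).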


2017 ◽  
Author(s):  
Timothy C Guetterman ◽  
Tammy Chang ◽  
Melissa DeJonckheere ◽  
Tanmay Basu ◽  
Elizabeth Scruggs ◽  
...  

BACKGROUND Qualitative research methods are increasingly being used across disciplines because of their ability to help investigators understand the perspectives of participants in their own words. However, qualitative analysis is a laborious and resource-intensive process. To achieve depth, researchers are limited to smaller sample sizes when analyzing text data. One potential method to address this concern is natural language processing (NLP). Qualitative text analysis involves researchers reading data, assigning code labels, and iteratively developing findings; NLP has the potential to automate part of this process. Unfortunately, little methodological research has been done to compare automatic coding using NLP techniques and qualitative coding, which is critical to establish the viability of NLP as a useful, rigorous analysis procedure. OBJECTIVE The purpose of this study was to compare the utility of a traditional qualitative text analysis, an NLP analysis, and an augmented approach that combines qualitative and NLP methods. METHODS We conducted a 2-arm cross-over experiment to compare qualitative and NLP approaches to analyze data generated through 2 text message (short message service) survey questions, one about prescription drugs and the other about police interactions, sent to youth aged 14-24 years. We randomly assigned a question to each of the 2 experienced qualitative analysis teams for independent coding and analysis before receiving NLP results. A third team separately conducted NLP analysis of the same 2 questions. We examined the results of our analyses to compare (1) the similarity of findings derived, (2) the quality of inferences generated, and (3) the time spent in analysis. RESULTS The qualitative-only analysis for the drug question (n=58) yielded 4 major findings, whereas the NLP analysis yielded 3 findings that missed contextual elements. The qualitative and NLP-augmented analysis was the most comprehensive. 
For the police question (n=68), the qualitative-only analysis yielded 4 primary findings and the NLP-only analysis yielded 4 slightly different findings. Again, the augmented qualitative and NLP analysis was the most comprehensive and produced the highest quality inferences, increasing our depth of understanding (ie, details and frequencies). In terms of time, the NLP-only approach was quicker than the qualitative-only approach for the drug (120 vs 270 minutes) and police (40 vs 270 minutes) questions. An approach beginning with qualitative analysis followed by qualitative- or NLP-augmented analysis took longer than one beginning with NLP for both the drug (450 vs 240 minutes) and police (390 vs 220 minutes) questions. CONCLUSIONS NLP provides both a foundation to code qualitatively more quickly and a method to validate qualitative findings. NLP methods were able to identify major themes found with traditional qualitative analysis but were not useful in identifying nuances. Traditional qualitative text analysis added important details and context.
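The NLP-augmented coding step can be sketched as applying a keyword-seeded codebook automatically and measuring agreement with manual codes. The codebook, keywords, and response below are hypothetical; the study's actual codes are not reproduced here.

```python
# Hypothetical codebook: code label -> seed keywords (not the study's codes).
CODEBOOK = {
    "peer_pressure": ["friends", "party", "pressure"],
    "pain_management": ["pain", "injury", "prescription"],
}

def auto_code(response: str) -> set:
    """Assign every code whose seed keywords appear in the response."""
    text = response.lower()
    return {code for code, kws in CODEBOOK.items() if any(k in text for k in kws)}

manual = {"peer_pressure"}  # hypothetical human-assigned codes
auto = auto_code("My friends tried to pressure me at a party.")
agreement = len(manual & auto) / len(manual | auto)  # Jaccard overlap
print(auto, agreement)  # {'peer_pressure'} 1.0
```

Used as a first pass, such automatic coding gives human analysts a head start, which is consistent with the time savings the study reports for NLP-first workflows.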


2019 ◽  
Vol 8 (3) ◽  
pp. 5713-5717

In everyday life we encounter many visually impaired people who face challenges with ordinary activities such as reading, walking, traveling, socializing, and writing. Braille is a script widely used by the visually impaired to read and write. A Braille character is a cell of raised dots embossed on paper, with six dots arranged in two columns of three rows; readers detect the presence or absence of dots with their fingertips, which gives them the code for the character. The proposed approach builds on an existing recognition framework combined with a finite state machine and a set of context-matching and interpretation rules. A system is proposed for converting Braille code to a Tamil voice message, implemented in Python using natural language processing, so that the text can be read out to others through the computer. In this paper, Braille code is extracted from an input image, mapped to a Tamil database, and stored.
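The dot-pattern-to-character mapping step can be sketched as a lookup from six-dot cells to Tamil characters. The mapping below is illustrative only and does not follow the Bharati Braille standard table; the paper's actual database is not reproduced here.

```python
# Each Braille cell has six dot positions (1-6) in two columns of three.
# A detected cell is represented as the set of raised-dot positions.
# This mapping is hypothetical, NOT the Bharati Braille standard.
BRAILLE_TO_TAMIL = {
    frozenset({1}): "அ",           # hypothetical: dot 1 -> 'a'
    frozenset({1, 2, 4, 5}): "க",  # hypothetical: dots 1,2,4,5 -> 'ka'
}

def decode_cells(cells):
    """Map a sequence of detected dot patterns to Tamil characters;
    unknown patterns become '?' for manual review."""
    return "".join(BRAILLE_TO_TAMIL.get(frozenset(c), "?") for c in cells)

print(decode_cells([{1}, {1, 2, 4, 5}]))  # அக
```

The decoded Tamil string would then be passed to a text-to-speech engine to produce the voice message described in the abstract.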


2020 ◽  
Author(s):  
Michelle B. Leavy ◽  
Danielle Cooke ◽  
Sarah Hajjar ◽  
Erik Bikelman ◽  
Bailey Egan ◽  
...  

Background: Major depressive disorder is a common mental disorder. Many pressing questions regarding depression treatment and outcomes exist, and new, efficient research approaches are necessary to address them. The primary objective of this project is to demonstrate the feasibility and value of capturing the harmonized depression outcome measures in the clinical workflow and submitting these data to different registries. Secondary objectives include demonstrating the feasibility of using these data for patient-centered outcomes research and developing a toolkit to support registries interested in sharing data with external researchers. Methods: The harmonized outcome measures for depression were developed through a multi-stakeholder, consensus-based process supported by AHRQ. For this implementation effort, the PRIME Registry, sponsored by the American Board of Family Medicine, and PsychPRO, sponsored by the American Psychiatric Association, each recruited 10 pilot sites from existing registry sites, added the harmonized measures to the registry platform, and submitted the project for institutional review board review. Results: The process of preparing each registry to calculate the harmonized measures produced three major findings. First, some clarifications were necessary to make the harmonized definitions operational. Second, some data necessary for the measures are not routinely captured in structured form (e.g., PHQ-9 item 9, adverse events, suicide ideation and behavior, and mortality data). Finally, capture of the PHQ-9 requires operational and technical modifications. The next phase of this project will focus on collection of the baseline and follow-up PHQ-9s, as well as other supporting clinical documentation. In parallel to the data collection process, the project team will examine the feasibility of using natural language processing to extract information on PHQ-9 scores, adverse events, and suicidal behaviors from unstructured data. 
Conclusion: This pilot project represents the first practical implementation of the harmonized outcome measures for depression. Initial results indicate that it is feasible to calculate the measures within the two patient registries, although some challenges were encountered related to the harmonized definition specifications, the availability of the necessary data, and the clinical workflow for collecting the PHQ-9. The ongoing data collection period, combined with an evaluation of the utility of natural language processing for these measures, will produce more information about the practical challenges, value, and burden of using the harmonized measures in the primary care and mental health setting. These findings will be useful to inform future implementations of the harmonized depression outcome measures.
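The NLP extraction the project plans to evaluate can be sketched as pattern matching for PHQ-9 totals in free text. The pattern below is an illustrative assumption; real clinical notes vary far more than this sketch handles.

```python
import re

# Illustrative pattern for PHQ-9 totals in free text (an assumption,
# not the project's actual extraction method).
PHQ9_RE = re.compile(
    r"phq[- ]?9(?:\s*(?:score|total))?\s*(?:of|is|:|=)?\s*(\d{1,2})",
    re.IGNORECASE,
)

def extract_phq9(note: str):
    """Return the first valid PHQ-9 total (0-27) found in a note, else None."""
    m = PHQ9_RE.search(note)
    if m:
        score = int(m.group(1))
        if 0 <= score <= 27:  # PHQ-9 totals range from 0 to 27
            return score
    return None

print(extract_phq9("Today's PHQ-9 score: 14, up from 9 last visit."))  # 14
print(extract_phq9("Denies suicidal ideation."))  # None
```

Bounding the score to the instrument's valid range (0-27) is a cheap guard against matching unrelated numbers.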


2021 ◽  
Author(s):  
Amrita De ◽  
Ming Huang ◽  
Tinghao Feng ◽  
Xiaomeng Yue ◽  
Lixia Yao

BACKGROUND Patient portals tethered to electronic health records systems have become attractive web platforms since the enactment of the Medicare Access and Children’s Health Insurance Program Reauthorization Act and the introduction of the Meaningful Use program in the United States. Patients can conveniently access their health records and seek consultation from providers through secure web portals. With increasing adoption and patient engagement, the volume of patient secure messages has risen substantially, which opens up new research and development opportunities for patient-centered care. OBJECTIVE This study aims to develop a data model for patient secure messages based on the Fast Healthcare Interoperability Resources (FHIR) standard to identify and extract significant information. METHODS We initiated the first draft of the data model by analyzing FHIR and manually reviewing 100 sentences randomly sampled from more than 2 million patient-generated secure messages obtained from the online patient portal at the Mayo Clinic Rochester between February 18, 2010, and December 31, 2017. We then annotated additional sets of 100 randomly selected sentences using the Multi-purpose Annotation Environment tool and updated the data model and annotation guideline iteratively until the interannotator agreement was satisfactory. We then created a larger corpus by annotating 1200 randomly selected sentences and calculated the frequency of the identified medical concepts in these sentences. Finally, we performed topic modeling analysis to learn the hidden topics of patient secure messages related to 3 highly mentioned microconcepts, namely, fatigue, prednisone, and patient visit, and to evaluate the proposed data model independently. RESULTS The proposed data model has a 3-level hierarchical structure of health system concepts, including 3 macroconcepts, 28 mesoconcepts, and 85 microconcepts. 
Foundation and base macroconcepts comprise 33.99% (841/2474), clinical macroconcepts comprise 64.38% (1593/2474), and financial macroconcepts comprise 1.61% (40/2474) of the annotated corpus. The top 3 mesoconcepts among the 28 mesoconcepts are condition (505/2474, 20.41%), medication (424/2474, 17.13%), and practitioner (243/2474, 9.82%). Topic modeling identified hidden topics of patient secure messages related to fatigue, prednisone, and patient visit. A total of 89.2% (107/120) of the top-ranked topic keywords are actually the health concepts of the data model. CONCLUSIONS Our data model and annotated corpus enable us to identify and understand important medical concepts in patient secure messages and prepare us for further natural language processing analysis of such free texts. The data model could be potentially used to automatically identify other types of patient narratives, such as those in various social media and patient forums. In the future, we plan to develop a machine learning and natural language processing solution to enable automatic triaging solutions to reduce the workload of clinicians and perform more granular content analysis to understand patients’ needs and improve patient-centered care.
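The evaluation step, checking what fraction of top-ranked topic keywords are concepts in the data model (89.2% in the study), reduces to a vocabulary-coverage calculation. The keyword and concept lists below are hypothetical placeholders, not the paper's actual lists.

```python
# Hypothetical top topic keywords and data-model concept vocabulary;
# the paper's actual lists are not reproduced here.
topic_keywords = ["fatigue", "tired", "prednisone", "dose", "appointment", "xyzzy"]
concept_vocab = {"fatigue", "tired", "prednisone", "dose", "appointment", "visit"}

# Coverage: fraction of topic keywords that are data-model concepts.
matched = [w for w in topic_keywords if w in concept_vocab]
coverage = len(matched) / len(topic_keywords)
print(f"{coverage:.1%}")  # 83.3%
```

A high coverage value is read as independent evidence that the data model captures the vocabulary patients actually use in secure messages.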

