Health Natural Language Processing: Methodology Development and Applications (Preprint)

2020 ◽  
Author(s):  
Tianyong Hao ◽  
Zhengxing Huang ◽  
Likeng Liang ◽  
Heng Weng ◽  
Buzhou Tang

UNSTRUCTURED With the rapid growth of information technology, the need to process massive amounts of health and medical data using advanced information technologies has also grown. A large amount of valuable data exists in natural-language text such as free-text diagnoses, discharge summaries, online health discussions, eligibility criteria of clinical trials, and so on. Health natural language processing automatically analyzes the commonalities and differences of large amounts of text data and recommends appropriate actions on behalf of domain experts to assist medical decision making. This editorial shares methodology innovations in health natural language processing and their applications in the medical domain.

2020 ◽  
Author(s):  
David DeFranza ◽  
Himanshu Mishra ◽  
Arul Mishra

Language provides an ever-present context for our cognitions and has the ability to shape them. Languages across the world can be gendered (languages in which the form of a noun, verb, or pronoun is marked as female or male) or genderless. In an ongoing debate, one stream of research suggests that gendered languages are more likely to display gender prejudice than genderless languages. However, another stream of research suggests that language does not have the ability to shape gender prejudice. In this research, we contribute to the debate by using a natural language processing (NLP) method that captures the meaning of a word from the context in which it occurs. Using text data from Wikipedia and the Common Crawl project (which contains text from billions of publicly facing websites) across 45 world languages, covering the majority of the world's population, we test for gender prejudice in gendered and genderless languages. We find that gender prejudice occurs more in gendered than in genderless languages. Moreover, we examine whether the genderedness of a language influences the stereotypic dimensions of warmth and competence using the same NLP method.
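The contextual-embedding approach described above can be illustrated with a short sketch. The following is a minimal association test in the spirit of embedding-association measures; the pretrained vectors (glove-wiki-gigaword-100 via gensim) and the word lists are assumptions for demonstration, not the authors' actual data or measure.

```python
# Minimal sketch of measuring gender association in word embeddings.
# The model name and word lists below are illustrative assumptions.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # any pretrained vectors would do

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, female_terms, male_terms):
    """Mean cosine similarity to female terms minus mean similarity to male terms."""
    f = np.mean([cosine(model[word], model[t]) for t in female_terms if t in model])
    m = np.mean([cosine(model[word], model[t]) for t in male_terms if t in model])
    return f - m

female = ["she", "her", "woman", "mother"]
male = ["he", "his", "man", "father"]

for target in ["nurse", "engineer", "warm", "competent"]:
    print(target, round(association(target, female, male), 4))
```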


Vector representations of language have been shown to be useful in a number of natural language processing tasks. In this paper, we investigate the effectiveness of word vector representations for the problem of sentiment analysis. In particular, we target three sub-tasks: sentiment word extraction, polarity detection for sentiment words, and text sentiment prediction. We investigate the effectiveness of vector representations over different text data and evaluate the quality of domain-dependent vectors. Vector representations are used to compute various vector-based features, and systematic experiments are conducted to demonstrate their effectiveness. Using simple vector-based features achieves better results for the text sentiment analysis of apps.
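As an illustration of what a simple vector-based feature might look like, the sketch below averages word vectors per text and feeds the result to a linear classifier. The toy data, embedding size, and classifier choice are assumptions, not a reproduction of the paper's features.

```python
# Minimal sketch: represent each text by the average of its word vectors,
# then train a linear classifier on those features. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from gensim.models import Word2Vec

texts = [["great", "app", "love", "it"], ["terrible", "crashes", "all", "the", "time"]]
labels = [1, 0]

# Train (or load) domain-dependent word vectors.
w2v = Word2Vec(sentences=texts, vector_size=50, min_count=1, epochs=50)

def avg_vector(tokens, model, dim=50):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([avg_vector(t, w2v) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```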


Author(s):  
Nazmun Nessa Moon ◽  
Imrus Salehin ◽  
Masuma Parvin ◽  
Md. Mehedi Hasan ◽  
Iftakhar Mohammad Talha ◽  
...  

In this study we describe a process for identifying unnecessary videos using a combined method of natural language processing and machine learning. The system also includes a framework containing analytics databases, which helps to measure statistical accuracy and can detect and then accept or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps: first from video to MPEG-1 Audio Layer 3 (MP3), and then from MP3 to WAV format. We have used the text-processing part of natural language processing to analyze and prepare the data set. We use both Naive Bayes and logistic regression classification algorithms in this detection system to determine the best accuracy for our system. In our research, the MP4 video data was converted to plain text using Python library functions. This brief study discusses the identification of unauthorized, unsocial, unnecessary, unfinished, and malicious videos from spoken video record data. By analyzing our data sets with this model, we can decide which videos should be accepted or rejected for further action.
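A rough sketch of such a pipeline (audio conversion, transcription, text classification) is shown below. The file names, the choice of speech recognizer, and the toy training data are assumptions for illustration; the study's exact tooling is not specified beyond Python.

```python
# Rough sketch of the described pipeline: video audio -> MP3 -> WAV -> text -> classifier.
# File names, recognizer, and training data are illustrative placeholders.
import speech_recognition as sr
from pydub import AudioSegment
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# 1. Convert the extracted MP3 track to WAV (requires ffmpeg on the system path).
AudioSegment.from_file("lecture.mp3", format="mp3").export("lecture.wav", format="wav")

# 2. Transcribe the WAV audio to plain text.
recognizer = sr.Recognizer()
with sr.AudioFile("lecture.wav") as source:
    transcript = recognizer.recognize_google(recognizer.record(source))

# 3. Train both classifiers on labeled transcripts and compare their decisions.
train_texts = ["educational content about science", "abusive and harmful speech"]
train_labels = [1, 0]  # 1 = accept, 0 = reject
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)

for clf in (MultinomialNB(), LogisticRegression()):
    clf.fit(X_train, train_labels)
    print(type(clf).__name__, clf.predict(vectorizer.transform([transcript])))
```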


2018 ◽  
Author(s):  
Jeremy Petch ◽  
Jane Batt ◽  
Joshua Murray ◽  
Muhammad Mamdani

BACKGROUND The increasing adoption of electronic health records (EHRs) in clinical practice holds the promise of improving care and advancing research by serving as a rich source of data, but most EHRs allow clinicians to enter data in a text format without much structure. Natural language processing (NLP) may reduce reliance on manual abstraction of these text data by extracting clinical features directly from unstructured clinical digital text data and converting them into structured data. OBJECTIVE This study aimed to assess the performance of a commercially available NLP tool for extracting clinical features from free-text consult notes. METHODS We conducted a pilot, retrospective, cross-sectional study of the accuracy of NLP from dictated consult notes from our tuberculosis clinic with manual chart abstraction as the reference standard. Consult notes for 130 patients were extracted and processed using NLP. We extracted 15 clinical features from these consult notes and grouped them a priori into categories of simple, moderate, and complex for analysis. RESULTS For the primary outcome of overall accuracy, NLP performed best for features classified as simple, achieving an overall accuracy of 96% (95% CI 94.3-97.6). Performance was slightly lower for features of moderate clinical and linguistic complexity at 93% (95% CI 91.1-94.4), and lowest for complex features at 91% (95% CI 87.3-93.1). CONCLUSIONS The findings of this study support the use of NLP for extracting clinical features from dictated consult notes in the setting of a tuberculosis clinic. Further research is needed to fully establish the validity of NLP for this and other purposes.
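For readers reproducing this kind of evaluation, the arithmetic for reporting accuracy with a 95% confidence interval can be sketched as follows. The study does not state its exact interval method, so the normal-approximation interval and the example counts below are assumptions, not the paper's computation.

```python
# Sketch of reporting overall extraction accuracy against manual abstraction
# with a normal-approximation 95% CI. Counts are hypothetical.
import math

def accuracy_with_ci(correct, total, z=1.96):
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# e.g., a hypothetical tally of simple-feature comparisons across the 130 notes
acc, lo, hi = accuracy_with_ci(correct=624, total=650)
print(f"accuracy {acc:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```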


Sentiment classification is one of the most popular and well-known domains of machine learning and natural language processing. An algorithm is developed to understand the opinion of an entity in a manner similar to human beings. This article presents research along those lines. Concepts from natural language processing are used for text representation. A novel word embedding model is then proposed for effective classification of the data. TF-IDF and common bag-of-words (BoW) representation models are also considered for representing the text data, and the importance of these models is discussed in the respective sections. The proposed model is tested on the IMDB dataset. A 50% training and 50% testing split, with three random shufflings of the dataset, is used to evaluate the model.
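A minimal sketch of the baseline setup described above (BoW and TF-IDF representations, a 50/50 split repeated over three random shuffles) might look like the following. The classifier choice, the local IMDB path, and the loading method are assumptions, not the paper's implementation.

```python
# Sketch: BoW and TF-IDF baselines with a 50/50 split over three random shuffles.
# Assumes the IMDB archive has been unpacked locally under aclImdb/.
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_files("aclImdb/train", categories=["pos", "neg"], encoding="utf-8")

for name, vec in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    scores = []
    for seed in range(3):  # three random shuffles, 50% train / 50% test
        X_tr, X_te, y_tr, y_te = train_test_split(
            data.data, data.target, test_size=0.5, shuffle=True, random_state=seed)
        clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_tr), y_tr)
        scores.append(clf.score(vec.transform(X_te), y_te))
    print(name, sum(scores) / len(scores))
```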


2021 ◽  
Author(s):  
Minoru Yoshida ◽  
Kenji Kita

Both words and numerals are tokens found in almost all documents, but they have different properties. However, relatively little attention has been paid to numerals found in texts, and many systems have treated the numbers found in a document in ad-hoc ways, such as regarding them as mere strings in the same way as words, normalizing them to zeros, or simply ignoring them. The recent growth of natural language processing (NLP) research has changed this situation, and more and more attention is being paid to numeracy in documents. In this survey, we provide a quick overview of the history and recent advances of research on mining the relations between numerals and words found in text data.
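The ad-hoc numeral treatments mentioned above (treating numbers as plain strings, normalizing their digits to zeros, or dropping them) can be illustrated in a few lines; the example tokens are made up.

```python
# Three ad-hoc ways of handling numeral tokens, as described above.
import re

tokens = ["revenue", "rose", "12.5", "%", "to", "$", "3400", "in", "2021"]

as_strings = tokens                                       # treat numbers like any other word
normalized = [re.sub(r"\d", "0", t) for t in tokens]      # "12.5" -> "00.0", "2021" -> "0000"
ignored = [t for t in tokens if not re.search(r"\d", t)]  # drop numeral tokens entirely

print(normalized)
print(ignored)
```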


2019 ◽  
Vol 35 (S1) ◽  
pp. 10-10
Author(s):  
Merve Gökgöl ◽  
Zeynep Orhan

Introduction: This study aimed to reach patients using different languages by providing an opportunity to enter symptoms as everyday-language text rather than as medical expressions of symptoms. Methodology: Named entity recognition (NER) techniques, based on natural language processing (NLP), were applied to develop a language-independent predictive model. The research was based on extracting the symptoms entered into the system by the patient using the NER method of NLP. To implement the system, Python was used for pre-processing the data, and a string similarity function was used to estimate similarity with disease symptoms. Two sets were used for classification: one including only symptoms, and the other the matching diseases. Four thousand two hundred and eighty different symptoms were processed for the corresponding 880 diseases. Results: Each user symptom had a similarity score for each symptom in all diseases. The top N results with the highest similarities were chosen from this list, and these final N results were matched with diseases. According to these results, matched diseases were ordered by the percentage of matched symptoms among each disease's symptoms. The extracted terms were used as input to the model and analyzed for a matching diagnosis; an accuracy of 83 percent was achieved when the system was tested and compared against Mayo Clinic data for specific foreign languages other than English. Conclusion: This language-independent online diagnostic tool is a solution for both personal and clinical use and provides maintainable, updatable, and more reliable diagnostics. The tool is particularly relevant today, with global mobility growing at a rate faster than the world's population. We aim to upgrade the system by adding speech recognition and engaging it with the patient's background (electronic health records, if available).
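A minimal sketch of the matching and ranking step described in the methodology might look like the following. The symptom dictionary, the similarity function (difflib's SequenceMatcher), and the top-N cutoff are illustrative assumptions, not the study's 4,280-symptom, 880-disease knowledge base.

```python
# Sketch: match user-entered symptom text to a symptom dictionary via string
# similarity, keep the top-N matches, and rank diseases by symptom coverage.
from difflib import SequenceMatcher

disease_symptoms = {
    "influenza": ["fever", "cough", "muscle ache"],
    "migraine": ["headache", "nausea", "light sensitivity"],
}

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def rank_diseases(user_symptoms, top_n=5):
    # Score every (user symptom, known symptom) pair and keep the best matches.
    matches = []
    for disease, symptoms in disease_symptoms.items():
        for known in symptoms:
            best = max(similarity(u, known) for u in user_symptoms)
            matches.append((best, disease, known))
    matches = sorted(matches, reverse=True)[:top_n]

    # Rank diseases by the share of their symptoms that were matched.
    hits = {}
    for _, disease, known in matches:
        hits[disease] = hits.get(disease, 0) + 1
    return sorted(
        ((n / len(disease_symptoms[d]), d) for d, n in hits.items()),
        reverse=True)

print(rank_diseases(["high temperature and coughing", "aching muscles"]))
```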

