Senti-BAS: A BERT-based model with sentiment computing for happiness research (Preprint)

2021 ◽  
Author(s):  
Zeyuan Zeng ◽  
Yijia Zhang ◽  
Liang Yang ◽  
Hongfei Lin

BACKGROUND Happiness has recently become a topic of rising interest, and it can be described in various forms. For text content, natural language processing (NLP) methods offer an interesting avenue for happiness research. OBJECTIVE As an abstract and complicated emotion, happiness has no common criterion by which it can be measured and described. Therefore, researchers are creating different models to study and measure it. METHODS In this paper, we present a deep-learning-based model called Senti-BAS (BERT-embedded Bi-LSTM with a self-Attention mechanism, combined with Sentiment computing). RESULTS Given a sentence describing how a person recently felt happiness, the model classifies the happiness scenario along two topics: whether it was controlled by the author (label ‘agency’) and whether it involved other people (label ‘social’). Besides language models, we employ label information through lexicon-based sentiment computing. CONCLUSIONS The model achieves high accuracy on both the ‘agency’ and ‘social’ labels, and we also compare it with several popular embedding models such as ELMo and GPT. Building on this work, happiness can be studied at a more fine-grained level.
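The sentiment-computing component is only named in the abstract, not specified. As a hedged illustration of what lexicon-based sentiment scoring looks like in general — the lexicon entries and weights below are hypothetical, not the ones Senti-BAS actually uses:

```python
# Minimal lexicon-based sentiment scorer (illustrative only; the actual
# Senti-BAS lexicon and weighting scheme are not given in the abstract).
HYPOTHETICAL_LEXICON = {
    "happy": 1.0, "joy": 1.0, "grateful": 0.8,
    "sad": -1.0, "lonely": -0.7,
}

def sentiment_score(sentence: str) -> float:
    """Sum lexicon polarities over the whitespace tokens of a sentence;
    tokens absent from the lexicon contribute 0."""
    tokens = sentence.lower().split()
    return sum(HYPOTHETICAL_LEXICON.get(t, 0.0) for t in tokens)

score = sentiment_score("I felt happy and grateful")  # 1.0 + 0.8
```

In the full model, such a score would be one extra feature alongside the BERT/Bi-LSTM representation rather than a classifier on its own.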

2021 ◽  
Author(s):  
Oscar Nils Erik Kjell ◽  
H. Andrew Schwartz ◽  
Salvatore Giorgi

The language that individuals use for expressing themselves contains rich psychological information. Recent significant advances in Natural Language Processing (NLP) and Deep Learning (DL), namely transformers, have resulted in large performance gains in tasks related to understanding natural language, such as machine translation. However, these state-of-the-art methods have not yet been made easily accessible to psychology researchers, nor designed to be optimal for human-level analyses. This tutorial introduces text (www.r-text.org), a new R-package for analyzing and visualizing human language using transformers, the latest techniques from NLP and DL. Text is both a modular solution for accessing state-of-the-art language models and an end-to-end solution catered to human-level analyses. Hence, text provides user-friendly functions tailored to testing hypotheses in the social sciences for both relatively small and large datasets. This tutorial describes useful methods for analyzing text, providing functions with reliable defaults that can be used off the shelf, as well as a framework for advanced users to build on for novel techniques and analysis pipelines. The reader learns about six methods: 1) textEmbed: to transform text into traditional or modern transformer-based word embeddings (i.e., numeric representations of words); 2) textTrain: to examine the relationships between text and numeric/categorical variables; 3) textSimilarity and 4) textSimilarityTest: to compute semantic similarity scores between texts and to significance-test the difference in meaning between two sets of texts; and 5) textProjection and 6) textProjectionPlot: to examine and visualize text within the embedding space according to latent or specified construct dimensions (e.g., low to high rating scale scores).


2020 ◽  
Author(s):  
Jacob Johnson ◽  
Grace Qiu ◽  
Christine Lamoureux ◽  
Jennifer Ngo ◽  
Lawrence Ngo

Though sophisticated algorithms have been developed for the classification of free-text radiology reports for pulmonary embolism (PE), their overall generalizability remains unvalidated given limitations in sample size and data homogeneity. We developed and validated a highly generalizable deep-learning-based NLP algorithm for this purpose with data sourced from over 2,000 hospital sites and 500 radiologists. The algorithm achieved an AUCROC of 0.995 on chest angiography studies and 0.994 on non-angiography studies for the presence or absence of PE. The high accuracy achieved on this large and heterogeneous dataset allows for the possibility of application in large multi-center radiology practices, as well as deployment at novel sites, without significant degradation in performance.
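The AUCROC metric reported here can be computed directly from model scores and binary labels. A minimal pure-Python sketch using the rank-based formulation (the data below are toy values, not the study's):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the probability that a randomly chosen positive is scored above a
    randomly chosen negative, with ties counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give an AUROC of 1.0.
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

An AUROC near 0.995, as reported, means almost every PE-positive report is scored above almost every PE-negative one.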


2021 ◽  
Author(s):  
Yoojoong Kim ◽  
Jeong Moon Lee ◽  
Moon Joung Jang ◽  
Yun Jin Yum ◽  
Jong-Ho Kim ◽  
...  

BACKGROUND With advances in deep learning and natural language processing, analyzing medical texts is becoming increasingly important. Nonetheless, despite the importance of medical texts, medical-domain-specific language models have not yet been studied for Korean. OBJECTIVE Korean medical text is highly difficult to analyze because of the agglutinative characteristics of the language as well as the complex terminology of the medical domain. To solve this problem, we collected a Korean medical corpus and used it to train language models. METHODS In this paper, we present a Korean medical language model based on deep-learning natural language processing. The proposed model was trained using the pre-training framework of BERT for the medical context, building on a state-of-the-art Korean language model. RESULTS After pre-training, the proposed method showed accuracy increases of 0.147 and 0.148 for the masked language model with next-sentence prediction. In the intrinsic evaluation, the next-sentence prediction accuracy improved by 0.258, which is a remarkable enhancement. In addition, the extrinsic evaluation on Korean medical semantic textual similarity data showed a 0.046 increase in the Pearson correlation. CONCLUSIONS The results demonstrate the superiority of the proposed model for Korean medical natural language processing. We expect that our proposed model can be extended for application to various languages and domains.
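The pre-training objectives mentioned above (masked language modeling with next-sentence prediction) follow BERT's standard recipe. As a rough sketch of the masked-language-model corruption step, assuming BERT's usual scheme of selecting ~15% of tokens and replacing 80% of selections with [MASK], 10% with a random token, and leaving 10% unchanged:

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style MLM corruption: select ~mask_prob of the tokens as
    prediction targets; of those, 80% become [MASK], 10% become a random
    vocabulary token, and 10% stay unchanged. Returns the corrupted
    sequence and the list of target positions."""
    rng = random.Random(seed)
    out, targets = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() < mask_prob:
            targets.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: leave the token unchanged (still a prediction target)
    return out, targets
```

During pre-training, the model is trained to recover the original token at each target position, which is what the reported masked-language-model accuracy measures.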


2021 ◽  
Vol 11 ◽  
Author(s):  
Hong-Jie Dai ◽  
Chu-Hsien Su ◽  
You-Qian Lee ◽  
You-Chen Zhang ◽  
Chen-Kai Wang ◽  
...  

The introduction of pre-trained language models in natural language processing (NLP) based on deep learning and the availability of electronic health records (EHRs) present a great opportunity to transfer the “knowledge” learned from data in the general domain to the analysis of unstructured textual data in clinical domains. This study explored the feasibility of applying NLP to a small EHR dataset to investigate the power of transfer learning to facilitate the process of patient screening in psychiatry. A total of 500 patients were randomly selected from a medical center database. Three annotators with clinical experience reviewed the notes to make diagnoses for major/minor depression, bipolar disorder, schizophrenia, and dementia, forming a small and highly imbalanced corpus. Several state-of-the-art NLP methods based on deep learning, along with pre-trained models based on shallow or deep transfer learning, were adapted to develop models to classify the aforementioned diseases. We hypothesized that models relying on transferred knowledge would outperform models learned from scratch. The experimental results demonstrated that the models with pre-trained techniques outperformed the models without transferred knowledge by micro-averaged and macro-averaged F-scores of 0.11 and 0.28, respectively. Our results also suggest that using the feature-dependency strategy to build multi-label models, instead of problem transformation, is superior, considering its higher performance and the simplicity of its training process.
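The micro- and macro-averaged F-scores reported above aggregate per-label performance differently, which matters for a highly imbalanced corpus like this one: micro-averaging pools counts and is dominated by frequent labels, while macro-averaging weights every label equally. A minimal sketch with toy counts (not the study's):

```python
def f1(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro_f1(counts):
    """counts: per-label (tp, fp, fn) triples.
    Micro-averaging sums counts across labels before computing F1;
    macro-averaging computes F1 per label and takes the mean."""
    micro = f1(*(sum(c[i] for c in counts) for i in range(3)))
    macro = sum(f1(*c) for c in counts) / len(counts)
    return micro, macro

# A frequent, easy label and a rare, hard label: micro looks much
# better than macro, because the rare label barely moves the pooled counts.
micro, macro = micro_macro_f1([(90, 5, 5), (1, 4, 9)])
```

This asymmetry is why the macro-averaged gain from transfer learning (0.28) exceeding the micro-averaged gain (0.11) suggests the pre-trained models helped most on the rarer diagnoses.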


2021 ◽  
Author(s):  
Benjamin Joseph Ricard ◽  
Saeed Hassanpour

BACKGROUND Many social media studies have explored the ability of thematic structures, such as hashtags and subreddits, to identify information related to a wide variety of mental health disorders. However, studies and models trained on specific themed communities are often difficult to apply to different social media platforms and related outcomes. A deep learning framework using thematic structures from Reddit and Twitter can have distinct advantages for studying alcohol abuse, particularly among youth in the United States. OBJECTIVE This study proposes a new deep learning pipeline that uses thematic structures to identify alcohol-related content across different platforms. We applied our method on Twitter to determine the association between the prevalence of alcohol-related tweets and alcohol-related outcomes reported by the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the Centers for Disease Control Behavioral Risk Factor Surveillance System (CDC BRFSS), County Health Rankings, and the North American Industry Classification System (NAICS). METHODS A Bidirectional Encoder Representations from Transformers (BERT) neural network learned to classify 1,302,524 Reddit posts as belonging to either alcohol-related or control subreddits. The trained model identified 24 alcohol-related hashtags from an unlabeled dataset of 843,769 random tweets. Querying the alcohol-related hashtags identified 25,558,846 alcohol-related tweets, including 790,544 location-specific (geotagged) tweets. We calculated the correlation of the prevalence of alcohol-related tweets with alcohol-related outcomes, controlling for confounding effects of age, sex, income, education, and self-reported race, as recorded by the 2013-2018 American Community Survey (ACS). RESULTS We present a novel natural language processing pipeline, developed using Reddit alcohol-related subreddits, that identifies highly specific alcohol-related Twitter hashtags. The prevalence of the identified hashtags carries interpretable information about alcohol consumption at both coarse (e.g., U.S. state) and fine-grained (e.g., MMSA, county) geographical designations. CONCLUSIONS This approach can expand research on, and interventions for, alcohol abuse and other behavioral health outcomes.
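The correlation analysis described in METHODS can be illustrated with a plain Pearson correlation between per-region tweet prevalence and an outcome measure. This is only a sketch: the study additionally controls for demographic confounders, which a simple bivariate correlation omits, and the values below are toy data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. per-county alcohol-tweet prevalence vs. an alcohol outcome."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy per-region data: prevalence of alcohol-related tweets vs. outcome rate.
r = pearson([0.1, 0.2, 0.3, 0.4], [2.0, 2.1, 2.9, 3.0])
```

A confounder-adjusted version would regress both variables on the ACS demographic covariates first and correlate the residuals.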


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 1-16
Author(s):  
Juan Cruz-Benito ◽  
Sanjay Vishwakarma ◽  
Francisco Martin-Fernandez ◽  
Ismael Faro

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that can be interpreted as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable applications of this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like auto-completing, generating, fixing, or evaluating code programmed by humans. Considering the increasing popularity of the deep-learning-enabled language model approach, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, namely Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and Transformers, while using transfer learning and different forms of tokenization, to see how they behave in building language models on a Python dataset for code-generation and fill-in-the-mask tasks. Considering the results, we discuss each approach’s different strengths and weaknesses and what gaps we found in evaluating the language models or applying them in a real programming context.
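One of the “different forms of tokenization” such a comparison must choose between can be illustrated with Python's standard-library tokenizer, which splits source code at the language-token level (a sketch of one option, not the paper's actual preprocessing; subword schemes like BPE are another common choice):

```python
import io
import tokenize

def code_tokens(source: str):
    """Tokenize Python source with the stdlib tokenizer, keeping only
    content-bearing tokens (names, operators, literals) and dropping
    layout tokens such as newlines and the end marker."""
    return [
        tok.string
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type not in (tokenize.NEWLINE, tokenize.NL,
                            tokenize.ENDMARKER, tokenize.ENCODING)
    ]

print(code_tokens("x = 1 + 2\n"))  # → ['x', '=', '1', '+', '2']
```

Language-level tokens like these give a model an exact vocabulary of operators and keywords, whereas subword tokenization handles unbounded identifier vocabularies; the trade-off is part of what an empirical comparison of code language models has to evaluate.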

