Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?

Author(s): Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, et al.

2020 ◽ Vol 191 ◽ pp. 105233
Author(s): Xin Zheng, Luyue Lin, Bo Liu, Yanshan Xiao, Xiaoming Xiong

2018
Author(s): Alon Rozental, Daniel Fleischer, Zohar Kelrich

2019
Author(s): Derek Howard, Marta M Maslej, Justin Lee, Jacob Ritchie, Geoffrey Woollard, et al.

BACKGROUND: Mental illness affects a significant portion of the worldwide population. Online mental health forums can provide a supportive environment for those afflicted, and they also generate a large amount of data that can be mined to predict mental health states using machine learning methods.

OBJECTIVE: This study aimed to benchmark multiple methods of text feature representation for social media posts and compare their downstream use with automated machine learning (AutoML) tools. We tested on datasets containing posts labeled for perceived suicide risk or for moderator attention in the context of self-harm. Specifically, we assessed the ability of the methods to prioritize posts that a moderator would identify for immediate response.

METHODS: We used 1588 labeled posts from the Computational Linguistics and Clinical Psychology (CLPsych) 2017 shared task, collected from the ReachOut.com forum. Posts were represented using lexicon-based tools, including Valence Aware Dictionary and sEntiment Reasoner (VADER), Empath, and Linguistic Inquiry and Word Count (LIWC), and also using pretrained artificial neural network models, including DeepMoji, Universal Sentence Encoder, and Generative Pretrained Transformer-1 (GPT-1). We used the Tree-based Pipeline Optimization Tool (TPOT) and Auto-Sklearn as AutoML tools to generate classifiers to triage the posts.

RESULTS: The top-performing system used features derived from the GPT-1 model, which was fine-tuned on over 150,000 unlabeled posts from ReachOut.com. Our top system had a macroaveraged F1 score of 0.572, providing a new state-of-the-art result on the CLPsych 2017 task. This was achieved without additional information from metadata or preceding posts. Error analyses revealed that the top system often missed expressions of hopelessness. In addition, we present visualizations that aid in understanding the learned classifiers.

CONCLUSIONS: Transfer learning is an effective strategy for predicting risk with relatively little labeled data, and fine-tuning of pretrained language models provides further gains when large amounts of unlabeled text are available.
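
As a concrete illustration of the pipeline this abstract describes (a text feature representation fed to an AutoML classifier search, evaluated by macroaveraged F1), the following is a minimal sketch, not the authors' code: it assumes VADER for the feature representation and TPOT for the AutoML stage, and the toy posts and labels are hypothetical stand-ins for the access-controlled CLPsych 2017 data.

    # Minimal sketch (not the authors' code) of the abstract's pipeline:
    # lexicon features -> AutoML classifier search, scored by macroaveraged F1.
    # Assumes: pip install vaderSentiment tpot scikit-learn numpy
    import numpy as np
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score
    from tpot import TPOTClassifier

    # Toy stand-ins for the CLPsych 2017 posts and triage labels
    # (1 = flag for moderator attention, 0 = no action needed).
    posts = [
        "I can't cope with this anymore and nothing helps",
        "Everything feels pointless lately",
        "I'm scared of what I might do tonight",
        "No one would even notice if I was gone",
        "Thanks everyone, your replies really helped",
        "Had a good session with my counsellor today",
        "Feeling much better after some proper sleep",
        "Great tips in this thread, much appreciated",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    # Feature representation: one row of VADER polarity scores per post.
    analyzer = SentimentIntensityAnalyzer()
    X = np.array([[s["neg"], s["neu"], s["pos"], s["compound"]]
                  for s in (analyzer.polarity_scores(p) for p in posts)])
    y = np.array(labels)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    # AutoML: TPOT searches over scikit-learn pipelines, optimizing the
    # macroaveraged F1 metric reported in the abstract. The tiny search
    # budget and cv=2 are only because the toy dataset is tiny.
    automl = TPOTClassifier(generations=2, population_size=10, cv=2,
                            scoring="f1_macro", random_state=0, verbosity=0)
    automl.fit(X_train, y_train)
    print("macro-F1:", f1_score(y_test, automl.predict(X_test), average="macro"))

Swapping the feature function for embeddings from DeepMoji, Universal Sentence Encoder, or a fine-tuned GPT-1 (the abstract's top system) would change only how X is built; the AutoML stage stays the same.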


IEEE Access ◽ 2020 ◽ Vol 8 ◽ pp. 20245-20256
Author(s): Junying Gan, Li Xiang, Yikui Zhai, Chaoyun Mai, Guohui He, et al.

2010 ◽ Vol 58 (7) ◽ pp. 866-871
Author(s): Fernando Fernández, Javier García, Manuela Veloso
