Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning

Author(s):
Zhaojiang Lin ◽
Andrea Madotto ◽
Pascale Fung


Author(s):
Kelvin Guu ◽
Tatsunori B. Hashimoto ◽
Yonatan Oren ◽
Percy Liang

We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.
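For intuition, here is a minimal sketch of the prototype-then-edit sampling procedure described above. It is not the authors' implementation: `edit_decoder` is a hypothetical stand-in for the trained neural editor, and the random direction-plus-norm draw is a simplified stand-in for the paper's edit-vector prior.

```python
import random
import numpy as np

def sample_edit_vector(dim=128, norm=2.0):
    """Draw a latent edit vector as a random direction with a fixed norm.
    (A simplified stand-in for the paper's edit prior; the key point is
    that the edit is sampled independently of the prototype.)"""
    direction = np.random.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    return norm * direction

def edit_decoder(prototype_tokens, edit_vector):
    """Placeholder for the neural editor p(x | prototype, z). A real
    implementation conditions a seq2seq decoder on both arguments;
    here we simply echo the prototype."""
    return prototype_tokens

def generate_sentence(corpus):
    prototype = random.choice(corpus)           # 1. sample a prototype sentence
    z = sample_edit_vector()                    # 2. sample a latent edit vector
    return edit_decoder(prototype.split(), z)   # 3. edit the prototype into a new sentence

corpus = ["the food was great", "service was slow but friendly"]
print(" ".join(generate_sentence(corpus)))
```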


Author(s):  
Shu Jiang ◽  
Zuchao Li ◽  
Hai Zhao ◽  
Bao-Liang Lu ◽  
Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test sets and has made remarkable progress. However, real-world text covers innumerable scenarios that such datasets do not, i.e., out-of-domain data. As a result, parsers that perform well in-domain usually suffer significant performance degradation out-of-domain. Adapting existing high-performing in-domain parsers to new domains therefore requires cross-domain transfer learning. This paper examines two such scenarios: semi-supervised and unsupervised cross-domain transfer learning. Specifically, we adopt the pre-trained language model BERT, trained on source-domain (in-domain) data at the subword level, and introduce self-training methods derived from tri-training for both scenarios. Evaluation on the NLPCC-2019 shared task and the universal dependency parsing task indicates the effectiveness of these approaches for cross-domain transfer learning and shows the potential of self-training for cross-lingual transfer learning.
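As an illustration of the tri-training-style self-training loop described above, here is a minimal sketch under simplifying assumptions: `Parser` is a hypothetical stand-in for a BERT-based dependency parser, and teacher agreement is checked as exact match of predicted head indices. It sketches the loop structure only, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Parser:
    """Stand-in for a trainable dependency parser."""
    seed: int
    train_data: list = field(default_factory=list)

    def train(self, labeled):
        self.train_data.extend(labeled)  # a real parser would fine-tune here

    def parse(self, sentence):
        # Placeholder prediction: a head index per token (0 = root).
        return [0] * len(sentence.split())

def tri_train(source_data, target_unlabeled, rounds=3):
    parsers = [Parser(seed=s) for s in range(3)]
    for p in parsers:
        p.train(source_data)                   # bootstrap on in-domain data
    for _ in range(rounds):
        for i, learner in enumerate(parsers):
            teachers = [p for j, p in enumerate(parsers) if j != i]
            pseudo = []
            for sent in target_unlabeled:
                t1, t2 = (t.parse(sent) for t in teachers)
                if t1 == t2:                   # both teachers agree -> pseudo-label
                    pseudo.append((sent, t1))
            learner.train(pseudo)              # retrain learner on agreed parses
    return parsers
```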


2020 ◽  
Vol 389 ◽  
pp. 93-107
Author(s):  
Jinmeng Wu ◽  
Tingting Mu ◽  
Jeyarajan Thiyagalingam ◽  
John Y. Goulermas

2021 ◽  
Author(s):  
Lele Yu ◽  
Shaowu Zhang ◽  
Yijia Zhang ◽  
Hongfei Lin

BACKGROUND Happiness refers to the joyful, pleasant emotions that humans experience subjectively. It is the positive side of emotion and affects the quality of human life, so understanding human happiness is a meaningful task in sentiment analysis. In this study we mainly discuss two facets of happiness, Agency and Sociality. Analyzing happiness allows us to expand the concepts that define it and enrich our understanding of emotion.

OBJECTIVE In this paper, we treated each happy moment as a sequence of short sentences and proposed a short-text happiness detection model based on transfer learning to analyze the Agency and Sociality aspects of happiness.

METHODS Happiness analysis is a novel and challenging research task, and the current datasets in this field are small. To address this, we utilized the unlabeled training set and transfer learning to train a semantically enhanced language model in the target domain. The trained, domain-adapted language model was then combined with other deep learning models to obtain a set of classifiers. Finally, we applied an improved voting strategy to further improve the experimental results.

RESULTS The proposed approach was evaluated on the public dataset. Experimental results showed that our approach significantly outperforms the baselines: when predicting the Agency aspect of happiness, it achieved an accuracy of 0.8574 and an F1 score of 0.90, and when predicting Sociality, an accuracy of 0.928 and an F1 score of 0.9360.

CONCLUSIONS The comparison results on this dataset demonstrate the effectiveness of our approach for happiness analysis. The experiments confirm that our method achieves state-of-the-art performance and that transfer learning effectively improves happiness analysis.
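To make the ensemble step concrete, below is a minimal sketch of a weighted majority vote over several fine-tuned classifiers. The stub classifiers and their per-model weights are hypothetical, and the paper's exact improved voting strategy may differ; this only illustrates the general voting pattern.

```python
from collections import defaultdict

def weighted_vote(labels, weights):
    """Combine per-model labels into one prediction via weighted majority."""
    scores = defaultdict(float)
    for label, w in zip(labels, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Stub classifiers standing in for the fine-tuned models; the vote
# weights are hypothetical validation accuracies.
classifiers = [
    (lambda text: "agency" if "i " in text.lower() else "no_agency", 0.84),
    (lambda text: "agency" if "my" in text.lower() else "no_agency", 0.85),
    (lambda text: "no_agency", 0.83),
]

def predict(text):
    labels = [clf(text) for clf, _ in classifiers]
    weights = [w for _, w in classifiers]
    return weighted_vote(labels, weights)

print(predict("I finally finished my thesis today!"))
```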

