Enhancing Label Correlation Feedback in Multi-Label Text Classification via Multi-Task Learning

Author(s):  
Ximing Zhang ◽  
Qian-Wen Zhang ◽  
Zhao Yan ◽  
Ruifang Liu ◽  
Yunbo Cao

Author(s):  
Dr. I. Jeena Jacob

Text classification, the process of identifying and categorizing text, is a tedious and challenging task. The Capsule Network (Caps-Net) is an architecture able to capture the essential attributes and domain insights that help bridge the knowledge gap between source and target tasks, and it learns more robust representations than convolutional neural networks (CNNs) in the image classification domain; this paper applies it to text classification. Because multi-task learning shares insights among related tasks and thereby indirectly enlarges the effective training data, a Caps-Net-based multi-task learning framework is proposed. The proposed architecture classifies text effectively and minimizes the interference experienced among the multiple tasks in multi-task learning. The framework is evaluated on several text classification datasets, confirming its efficacy.
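The abstract gives no implementation details, so the following is only a minimal sketch of what a Caps-Net-based multi-task text classifier could look like: a shared embedding and primary-capsule encoder with one capsule head per task. All names, layer sizes, and the routing-free heads are illustrative assumptions (dynamic routing is omitted for brevity), not the paper's method.

```python
import torch
import torch.nn as nn

def squash(s, dim=-1):
    # Capsule non-linearity: shrinks short vectors toward 0, long ones toward 1.
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + 1e-8)

class MultiTaskCapsNet(nn.Module):
    # Shared embedding + primary-capsule encoder; one capsule head per task,
    # so related tasks share low-level features but keep separate class capsules.
    def __init__(self, vocab_size, num_classes_per_task,
                 emb_dim=128, caps_dim=8, num_caps=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_caps * caps_dim, kernel_size=3, padding=1)
        self.caps_dim = caps_dim
        self.heads = nn.ModuleList(
            nn.Linear(num_caps * caps_dim, n_cls * caps_dim)
            for n_cls in num_classes_per_task
        )

    def forward(self, token_ids, task_id):          # token_ids: (B, T)
        x = self.embed(token_ids).transpose(1, 2)   # (B, emb_dim, T)
        prim = self.conv(x).max(dim=2).values       # pool over time
        prim = squash(prim.view(token_ids.size(0), -1, self.caps_dim))
        cls = self.heads[task_id](prim.flatten(1))
        cls = squash(cls.view(token_ids.size(0), -1, self.caps_dim))
        return cls.norm(dim=-1)                     # capsule lengths as class scores

# Usage with two hypothetical tasks of 2 and 5 classes.
model = MultiTaskCapsNet(vocab_size=10000, num_classes_per_task=[2, 5])
scores = model(torch.randint(0, 10000, (4, 20)), task_id=0)  # -> shape (4, 2)
```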


2018 ◽  
Vol 32 (11) ◽  
pp. 6467-6480
Author(s):  
Guangquan Lu ◽  
Jiangzhang Gan ◽  
Jian Yin ◽  
Zhiping Luo ◽  
Bo Li ◽  
...  

2021 ◽  
Author(s):  
Huiting Liu ◽  
Geng Chen ◽  
Peipei Li ◽  
Peng Zhao ◽  
Xindong Wu

Author(s):  
Pengfei Liu ◽  
Jie Fu ◽  
Yue Dong ◽  
Xipeng Qiu ◽  
Jackie Chi Kit Cheung

We present two architectures for multi-task learning with neural sequence models. Our approach allows the relationships between different tasks to be learned dynamically, rather than using an ad-hoc pre-defined structure as in previous work. We adopt the idea from message-passing graph neural networks, and propose a general graph multi-task learning framework in which different tasks can communicate with each other in an effective and interpretable way. We conduct extensive experiments in text classification and sequence labelling to evaluate our approach on multi-task learning and transfer learning. The empirical results show that our models not only outperform competitive baselines, but also learn interpretable and transferable patterns across tasks.
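The abstract describes the idea but not the exact formulation; below is a hedged sketch of one message-passing round among per-task state vectors, with attention-weighted edges standing in for the learned task relationships. The module names, the GRU update, and the pairwise attention are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class TaskGraphLayer(nn.Module):
    # One round of message passing among per-task state vectors. Edge weights
    # are a soft attention over all task pairs, so the task graph is learned
    # dynamically rather than fixed in advance.
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)      # message function
        self.att = nn.Linear(2 * dim, 1)    # scores relatedness of task pairs
        self.upd = nn.GRUCell(dim, dim)     # node (task-state) update

    def forward(self, h):                   # h: (num_tasks, dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        w = torch.softmax(self.att(pairs).squeeze(-1), dim=1)  # (n, n) edge weights
        messages = w @ self.msg(h)          # aggregate neighbours' messages
        return self.upd(messages, h)        # updated task states
```

One convenient property of this formulation is that the attention matrix w can be inspected directly, which matches the abstract's claim of interpretable cross-task patterns.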


Author(s):  
Honglun Zhang ◽  
Liqiang Xiao ◽  
Yongkun Wang ◽  
Yaohui Jin

Multi-task learning leverages potential correlations among related tasks to extract common features and yield performance gains. However, most previous works only consider simple or weak interactions, thereby failing to model complex correlations among three or more tasks. In this paper, we propose a multi-task learning architecture with four types of recurrent neural layers to fuse information across multiple related tasks. The architecture is structurally flexible and considers various interactions among tasks, which can be regarded as a generalized case of many previous works. Extensive experiments on five benchmark datasets for text classification show that our model can significantly improve performances of related tasks with additional information from others.
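The abstract does not spell out the four recurrent layer types, so here is only one illustrative instance of the cross-task coupling it describes: two task-specific LSTMs that exchange hidden states at every time step. The names, the pairwise-only coupling, and the exchange scheme are assumptions.

```python
import torch
import torch.nn as nn

class CoupledTaskRNN(nn.Module):
    # Two task-specific LSTMs that read each other's hidden state at every
    # step, a simple form of recurrent information fusion across tasks.
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.cell_a = nn.LSTMCell(emb_dim + hid_dim, hid_dim)
        self.cell_b = nn.LSTMCell(emb_dim + hid_dim, hid_dim)
        self.hid_dim = hid_dim

    def forward(self, emb_a, emb_b):        # each: (B, T, emb_dim)
        B, T, _ = emb_a.shape
        h_a = c_a = emb_a.new_zeros(B, self.hid_dim)
        h_b = c_b = emb_b.new_zeros(B, self.hid_dim)
        for t in range(T):
            # Each cell's input is its own token plus the other task's state.
            h_a, c_a = self.cell_a(torch.cat([emb_a[:, t], h_b], dim=-1), (h_a, c_a))
            h_b, c_b = self.cell_b(torch.cat([emb_b[:, t], h_a], dim=-1), (h_b, c_b))
        return h_a, h_b                     # final states feed task-specific classifiers
```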


2017 ◽  
Vol 21 (6) ◽  
pp. 1371-1392 ◽  
Author(s):  
Zhiyang He ◽  
Ji Wu ◽  
Ping Lv

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 86380-86387 ◽  
Author(s):  
Wei Zhao ◽  
Hui Gao ◽  
Shuhui Chen ◽  
Nan Wang
