Improving Low-Resource Chinese Event Detection with Multi-task Learning

Author(s):  
Meihan Tong ◽  
Bin Xu ◽  
Shuai Wang ◽  
Lei Hou ◽  
Juanzi Li
2021 ◽  
Vol 13 (5) ◽  
pp. 168781402110131
Author(s):  
Junfeng Wu ◽  
Li Yao ◽  
Bin Liu ◽  
Zheyuan Ding ◽  
Lei Zhang

As ever more sensor data are collected, automated detection and diagnosis systems are urgently needed to lessen the growing monitoring burden and reduce the risk of system faults. A great deal of research has addressed anomaly detection, event detection, and anomaly diagnosis separately, but no current approach handles all three in one unified framework. In this work, a Multi-Task Learning based Encoder-Decoder (MTLED) is proposed that simultaneously detects anomalies, diagnoses them, and detects events. In MTLED, a feature matrix is introduced so that features are extracted for each time point and point-wise anomaly detection can be realized end to end. Anomaly diagnosis and event detection share this feature matrix with anomaly detection in the multi-task learning framework and also provide important information for system monitoring. To train such a comprehensive detection and diagnosis system, a large-scale multivariate time-series dataset containing anomalies of multiple types is generated with simulation tools. Extensive experiments on the synthetic dataset verify the effectiveness of MTLED and its multi-task learning framework, and evaluation on a real-world dataset demonstrates that MTLED can be transferred to other application scenarios.
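A minimal PyTorch sketch of the shared-feature, multi-head idea the abstract describes; the GRU encoder, layer sizes, head names, and equal treatment of the three tasks are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MTLEDSketch(nn.Module):
    """Shared-encoder multi-task sketch: one per-time-point feature matrix
    feeds point-wise anomaly detection, anomaly-type diagnosis, and event
    detection heads. All sizes are illustrative assumptions."""

    def __init__(self, n_channels=8, hidden=64, n_anomaly_types=4, n_event_types=3):
        super().__init__()
        # Encoder: one feature vector per time point (the "feature matrix").
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True, bidirectional=True)
        feat = hidden * 2
        # The three task heads read the same feature matrix.
        self.anomaly_head = nn.Linear(feat, 1)                  # point-wise anomaly score
        self.diagnosis_head = nn.Linear(feat, n_anomaly_types)  # anomaly type per point
        self.event_head = nn.Linear(feat, n_event_types)        # event class per point

    def forward(self, x):                         # x: (batch, time, channels)
        feats, _ = self.encoder(x)                # (batch, time, feat) feature matrix
        return (torch.sigmoid(self.anomaly_head(feats)).squeeze(-1),
                self.diagnosis_head(feats),
                self.event_head(feats))

# Joint training would sum the three task losses over the shared encoder,
# e.g. BCE for anomaly scores plus cross-entropy for diagnosis and events.
model = MTLEDSketch()
anomaly, diagnosis, events = model(torch.randn(2, 100, 8))
```

Because every head reads the same per-time-point feature matrix, the sketch reproduces the property the abstract credits for enabling end-to-end, point-wise detection alongside diagnosis and event detection.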


2021 ◽  
pp. 452-463
Author(s):  
Yanxia Qin ◽  
Jingjing Ding ◽  
Yiping Sun ◽  
Xiangwu Ding

2021 ◽  
pp. 1-1
Author(s):  
Han Liang ◽  
Wanting Ji ◽  
Ruili Wang ◽  
Yaxiong Ma ◽  
Jincai Chen ◽  
...  

2018 ◽  
Vol 6 ◽  
pp. 225-240 ◽  
Author(s):  
Eliyahu Kiperwasser ◽  
Miguel Ballesteros

Neural encoder-decoder models of machine translation have achieved impressive results while learning linguistic knowledge of both the source and target languages in an implicit, end-to-end manner. We propose a framework in which our model begins by learning syntax and translation interleaved, gradually putting more focus on translation. Using this approach, we achieve considerable improvements in BLEU score on a relatively large parallel corpus (WMT14 English to German) and in a low-resource (WIT German to English) setup.
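A short sketch of the scheduling idea under stated assumptions: the abstract says syntax and translation are interleaved with focus gradually shifting to translation, but the linear decay schedule, the 0.8 starting probability, and the function names below are illustrative, not the authors' recipe.

```python
import random

def pick_task(step, total_steps, start_syntax_prob=0.8):
    """Return which task to train on at this step. The auxiliary syntax
    task is sampled with a probability that decays linearly to zero, so
    optimisation gradually shifts its focus to translation."""
    p_syntax = start_syntax_prob * (1.0 - step / total_steps)
    return "syntax" if random.random() < p_syntax else "translation"

total_steps = 100_000
for step in range(total_steps):
    if pick_task(step, total_steps) == "syntax":
        ...  # one gradient step on the parsing objective (shared encoder)
    else:
        ...  # one gradient step on the translation objective
```

Both branches would update the same shared encoder, which is how the syntax task can inject linguistic knowledge that later benefits translation.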


2018 ◽  
Vol 8 (8) ◽  
pp. 1397 ◽  
Author(s):  
Veronica Morfi ◽  
Dan Stowell

In training a deep learning system to perform audio transcription, two practical problems arise. First, most datasets are weakly labelled: each recording comes with only a list of the events present, without any temporal information for training. Second, deep neural networks need a very large amount of labelled training data to perform well, yet in practice it is difficult to collect enough samples for most classes of interest. In this paper, we propose factorising the final task of audio transcription into multiple intermediate tasks in order to improve training on such low-resource datasets. We evaluate three data-efficient approaches to training a stacked convolutional and recurrent neural network for the intermediate tasks. Our results show that the different training methods have different advantages and disadvantages.
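A hedged PyTorch sketch of a stacked convolutional-recurrent network with a frame-level intermediate task pooled into a clip-level output that can be matched against the weak labels; the layer sizes, the max-pooling aggregation, and all names are assumptions for illustration rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class CRNNSketch(nn.Module):
    """Stacked convolutional-recurrent sketch for weakly labelled audio.
    A frame-level head serves as an intermediate task; max-pooling over
    time yields the clip-level prediction trained on the weak label."""

    def __init__(self, n_mels=64, n_classes=10, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
        )
        self.rnn = nn.GRU(32 * (n_mels // 16), hidden,
                          batch_first=True, bidirectional=True)
        self.frame_head = nn.Linear(hidden * 2, n_classes)

    def forward(self, x):                          # x: (batch, 1, time, mels)
        h = self.conv(x)                           # (batch, 32, time, mels // 16)
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)
        h, _ = self.rnn(h)
        frame = torch.sigmoid(self.frame_head(h))  # intermediate, per-frame task
        clip = frame.max(dim=1).values             # weak-label (clip-level) task
        return frame, clip

model = CRNNSketch()
frame, clip = model(torch.randn(2, 1, 200, 64))    # 2 clips, 200 frames, 64 mel bins
```

Training the clip-level output against the weak labels while keeping the frame-level head as an intermediate task is one way to recover temporal structure the annotations never provide.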


2020 ◽  
Vol 22 (3) ◽  
pp. 569-578
Author(s):  
Xianjun Xia ◽  
Roberto Togneri ◽  
Ferdous Sohel ◽  
Yuanjun Zhao ◽  
Defeng Huang

Author(s):  
Yong Hu ◽  
Heyan Huang ◽  
Tian Lan ◽  
Xiaochi Wei ◽  
Yuxiang Nie ◽  
...  
