Similar floor plan retrieval featuring multi-task learning of layout type classification and room presence prediction

Author(s):  
Yuki Takada ◽  
Naoto Inoue ◽  
Toshihiko Yamasaki ◽  
Kiyoharu Aizawa
Author(s):  
Quanzhi Li ◽  
Qiong Zhang

There is a massive amount of news about financial events every day. In this paper, we present a unified model for detecting, classifying, and summarizing financial events. The model takes a multi-task learning approach, in which a pre-trained BERT model encodes the news articles and the encoded information is shared by the event type classification, detection, and summarization tasks. For event summarization, we use a Transformer structure as the decoder. In addition to the input document encoded by BERT, the decoder also utilizes the predicted event type and cluster information, so that it can focus on the specific aspects of the event when generating the summary. Our experiments show that our approach outperforms other methods.
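
A minimal sketch of the shared-encoder, multi-head architecture described in the abstract, assuming a PyTorch/Hugging Face setup; class names, dimensions, and the way the predicted event type is injected into the decoder are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MultiTaskEventModel(nn.Module):
    def __init__(self, num_event_types, vocab_size, hidden=768):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")  # shared BERT encoder
        self.type_head = nn.Linear(hidden, num_event_types)            # event type classification head
        self.detect_head = nn.Linear(hidden, 2)                        # event detection head (event / non-event)
        self.type_embed = nn.Embedding(num_event_types, hidden)        # embeds the predicted event type
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)      # summarization decoder
        self.tok_embed = nn.Embedding(vocab_size, hidden)
        self.generator = nn.Linear(hidden, vocab_size)

    def forward(self, input_ids, attention_mask, summary_ids):
        enc = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        memory = enc.last_hidden_state                  # token-level document representations
        pooled = memory[:, 0]                           # [CLS] vector summarizing the document
        type_logits = self.type_head(pooled)
        detect_logits = self.detect_head(pooled)
        # Condition the decoder on the predicted event type by prepending its embedding
        # to the encoder memory (one possible way to inject event-type information).
        pred_type = type_logits.argmax(dim=-1)
        memory = torch.cat([self.type_embed(pred_type).unsqueeze(1), memory], dim=1)
        dec = self.decoder(self.tok_embed(summary_ids), memory)
        summary_logits = self.generator(dec)
        return type_logits, detect_logits, summary_logits
```

In such a setup the three task losses (classification, detection, summarization cross-entropy) would typically be summed so that gradients from all tasks update the shared encoder.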


Author(s):  
Markus Weber ◽  
Christoph Langenhan ◽  
Thomas Roth-Berghofer ◽  
Marcus Liwicki ◽  
Andreas Dengel ◽  
...  

Author(s):  
Bo Shao ◽  
Yeyun Gong ◽  
Junwei Bao ◽  
Jianshu Ji ◽  
Guihong Cao ◽  
...  

Semantic parsing is a challenging and important task that aims to convert a natural language sentence into a logical form. Existing neural semantic parsing methods mainly use <question, logical form> (Q-L) pairs to train a sequence-to-sequence model. However, Q-L labeled data is limited and hard to obtain. We propose an effective method that leverages labeling information from other tasks to enhance the training of a semantic parser. We design a multi-task learning model that jointly trains question type classification, entity mention detection, and question semantic parsing with a shared encoder. We further propose a weakly supervised learning method that enhances the multi-task model with paraphrase data, based on the idea that paraphrased questions should share the same logical form and question type. Finally, we integrate the weakly supervised multi-task learning method into an encoder-decoder framework. Experiments on a newly constructed dataset and on ComplexWebQuestions show that the proposed method outperforms state-of-the-art methods, demonstrating its effectiveness and robustness.
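
A minimal sketch of the multi-task setup described in the abstract: one shared encoder feeding a sentence-level question-type head, a token-level entity-mention head, and a logical-form decoder, plus a paraphrase-consistency term as one possible form of the weak supervision. All names, dimensions, and the use of a KL term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedEncoderParser(nn.Module):
    def __init__(self, vocab_size, lf_vocab_size, num_q_types, num_tags, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)  # shared encoder
        self.qtype_head = nn.Linear(2 * hidden, num_q_types)   # question type classification
        self.tag_head = nn.Linear(2 * hidden, num_tags)        # entity mention detection (per token)
        self.lf_embed = nn.Embedding(lf_vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, 2 * hidden, batch_first=True)  # logical-form decoder
        self.lf_out = nn.Linear(2 * hidden, lf_vocab_size)

    def forward(self, question_ids, lf_ids):
        states, _ = self.encoder(self.embed(question_ids))     # shared representations
        qtype_logits = self.qtype_head(states.mean(dim=1))     # sentence-level prediction
        tag_logits = self.tag_head(states)                     # token-level prediction
        dec, _ = self.decoder(self.lf_embed(lf_ids))            # attention over states omitted for brevity
        lf_logits = self.lf_out(dec)
        return qtype_logits, tag_logits, lf_logits

def paraphrase_consistency(model, q_ids, para_ids, lf_ids):
    # Weak supervision from paraphrase data: a question and its paraphrase should
    # receive the same question-type distribution (KL divergence as an illustration).
    p = torch.log_softmax(model(q_ids, lf_ids)[0], dim=-1)
    q = torch.softmax(model(para_ids, lf_ids)[0], dim=-1)
    return nn.functional.kl_div(p, q, reduction="batchmean")
```

During training, the consistency term would be added to the supervised multi-task loss so that unlabeled paraphrase pairs also shape the shared encoder.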

