Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based QA Framework

Author(s):  
Abhilash Nandy ◽  
Soumya Sharma ◽  
Shubham Maddhashiya ◽  
Kapil Sachdeva ◽  
Pawan Goyal ◽  
...  
2020 ◽  
Vol 34 (05) ◽  
pp. 8010-8017 ◽  
Author(s):  
Di Jin ◽  
Shuyang Gao ◽  
Jiun-Yu Kao ◽  
Tagyoung Chung ◽  
Dilek Hakkani-Tur

Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills, such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart, where answers are usually spans of text within the given passages. Moreover, most existing MCQA datasets are small in size, making the task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: a coarse-tuning stage using out-of-domain datasets and a multi-task learning stage using a larger in-domain dataset to help the model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate that MMM significantly advances the state-of-the-art on four representative MCQA datasets.
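The two sequential stages described above can be sketched as a simple batch schedule: coarse-tuning batches from out-of-domain data first, then interleaved batches from several in-domain tasks. This is a minimal illustration of the staging idea only; the function names, round-robin interleaving policy, and stage labels are assumptions, not the paper's actual training code.

```python
from itertools import zip_longest


def round_robin(task_batches):
    """Interleave batches across tasks: one batch per task per cycle,
    skipping tasks whose data runs out early."""
    for group in zip_longest(*task_batches):
        for batch in group:
            if batch is not None:
                yield batch


def mmm_schedule(out_of_domain_batches, in_domain_task_batches):
    """Yield (stage, batch) pairs: Stage 1 coarse-tuning on out-of-domain
    data, then Stage 2 multi-task learning on in-domain tasks."""
    for batch in out_of_domain_batches:
        yield ("coarse", batch)
    for batch in round_robin(in_domain_task_batches):
        yield ("multi", batch)
```

With two out-of-domain batches and two in-domain tasks, `mmm_schedule(["o1", "o2"], [["a1", "a2"], ["b1"]])` yields the coarse-tuning batches first, then alternates between the in-domain tasks.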


Author(s):  
Wenyu Du ◽  
Baocheng Li ◽  
Min Yang ◽  
Qiang Qu ◽  
Ying Shen

In this paper, we propose a Multi-Task learning approach for Answer Selection (MTAS), motivated by the fact that humans have no difficulty performing such a task because they possess capabilities spanning multiple domains (tasks). Specifically, MTAS consists of two key components: (i) a category classification model that learns rich category-aware document representations; (ii) an answer selection model that provides matching scores for question-answer pairs. These two tasks work on a shared document encoding layer, and they cooperate to learn a high-quality answer selection system. In addition, a multi-head attention mechanism is proposed to learn important information from different representation subspaces at different positions. We manually annotate the first Chinese question answering dataset in the law domain (denoted as LawQA) to evaluate the effectiveness of our model. The experimental results show that our model MTAS consistently outperforms the compared methods.
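The shared-encoder structure described above, one encoding layer feeding two task heads, can be illustrated with a toy NumPy sketch. All weights here are random placeholders standing in for a trained encoder, and the head designs (a linear classifier and a cosine-similarity matcher) are illustrative assumptions, not MTAS's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_categories = 16, 4

# Shared document encoding layer: both tasks read from this projection.
W_shared = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
# Task head 1: category classification over the shared representation.
W_cls = rng.standard_normal((d_model, n_categories)) / np.sqrt(d_model)


def encode(x):
    """Shared encoder used by both tasks."""
    return np.tanh(x @ W_shared)


def category_logits(doc):
    """Category-classification head: logits over document categories."""
    return encode(doc) @ W_cls


def match_score(question, answer):
    """Answer-selection head: cosine similarity of shared encodings."""
    q, a = encode(question), encode(answer)
    return float(q @ a / (np.linalg.norm(q) * np.linalg.norm(a)))
```

Because `encode` is shared, gradients from both heads would update the same document representation during joint training, which is the cooperation the abstract describes.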


2021 ◽  
Author(s):  
Zhaoquan Yuan ◽  
Xiao Peng ◽  
Xiao Wu ◽  
Changsheng Xu

2019 ◽  
Author(s):  
Tao Shen ◽  
Xiubo Geng ◽  
Tao Qin ◽  
Daya Guo ◽  
Duyu Tang ◽  
...  

2019 ◽  
Vol 171 ◽  
pp. 106-119 ◽  
Author(s):  
Min Yang ◽  
Wenting Tu ◽  
Qiang Qu ◽  
Wei Zhou ◽  
Qiao Liu ◽  
...  

Author(s):  
Yang Deng ◽  
Yuexiang Xie ◽  
Yaliang Li ◽  
Min Yang ◽  
Nan Du ◽  
...  

Answer selection and knowledge base question answering (KBQA) are two important tasks in question answering (QA) systems. Existing methods solve these two tasks separately, which requires a large amount of repetitive work and neglects the rich correlation information between the tasks. In this paper, we tackle answer selection and KBQA simultaneously via multi-task learning (MTL), for two reasons. First, both answer selection and KBQA can be regarded as ranking problems, one at the text level and the other at the knowledge level. Second, these two tasks can benefit each other: answer selection can incorporate external knowledge from a knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection. To jointly learn these two tasks, we propose a novel multi-task learning scheme that utilizes multi-view attention learned from various perspectives to enable the tasks to interact with each other and to learn more comprehensive sentence representations. Experiments conducted on several real-world datasets demonstrate the effectiveness of the proposed method, improving the performance of both answer selection and KBQA. The multi-view attention scheme also proves effective in assembling attentive information from different representational perspectives.
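The core of a multi-view attention scheme like the one described above is combining attention distributions computed from several perspectives (e.g. a text-level view and a knowledge-level view) into one set of token weights. The sketch below shows one simple way to do this, averaging per-view softmax distributions; the averaging choice is an assumption for illustration, not the paper's exact aggregation.

```python
import numpy as np


def softmax(scores):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(scores - scores.max())
    return e / e.sum()


def multi_view_attention(scores_per_view):
    """Combine attention distributions from several views into one.

    Each element of scores_per_view is a 1-D array of unnormalized
    token scores from one perspective (e.g. text-level, knowledge-level).
    """
    views = np.stack([softmax(s) for s in scores_per_view])
    combined = views.mean(axis=0)      # aggregate across views
    return combined / combined.sum()   # renormalize to a distribution
```

Each view contributes a full probability distribution over tokens, so a token emphasized by any single view keeps nonzero weight in the combined attention.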

