A Pairwise Probe for Understanding BERT Fine-Tuning on Machine Reading Comprehension

Author(s):  
Jie Cai ◽  
Zhengzhou Zhu ◽  
Ping Nie ◽  
Qian Liu

Semantics-Aware BERT for Language Understanding
2020 ◽
Vol 34 (05) ◽  
pp. 9628-9635
Author(s):  
Zhuosheng Zhang ◽  
Yuwei Wu ◽  
Hai Zhao ◽  
Zuchao Li ◽  
Shuailiang Zhang ◽  
...  

The latest work on language representations carefully integrates contextualized features into language model training, which has enabled a series of successes, especially on machine reading comprehension and natural language inference tasks. However, existing language representation models, including ELMo, GPT, and BERT, exploit only plain context-sensitive features such as character or word embeddings. They rarely consider incorporating structured semantic information, which can provide rich semantics for language representation. To promote natural language understanding, we propose to incorporate explicit contextual semantics from pre-trained semantic role labeling and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics over a BERT backbone. SemBERT keeps the convenient usability of its BERT precursor through light fine-tuning, without substantial task-specific modifications. Compared with BERT, SemBERT is equally simple in concept but more powerful. It obtains new state-of-the-art results or substantially improves on existing results across ten reading comprehension and language inference tasks.
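As a rough illustration of the fusion described above, the sketch below embeds per-token semantic role labels, concatenates them with BERT's contextual features, and projects back to the hidden size; the shapes, label count, and module names are hypothetical simplifications, not the authors' released implementation (which aggregates multiple predicate-argument structures per sentence).

import torch
import torch.nn as nn

class SemanticFusion(nn.Module):
    # Sketch of SemBERT-style fusion: embed per-token semantic role
    # labels (produced by a pre-trained SRL tagger), concatenate them
    # with the BERT hidden states, and project back to the hidden size
    # so the usual task heads can be fine-tuned on top unchanged.
    def __init__(self, hidden_size=768, num_srl_labels=32, srl_dim=16):
        super().__init__()
        self.srl_embedding = nn.Embedding(num_srl_labels, srl_dim)
        self.project = nn.Linear(hidden_size + srl_dim, hidden_size)

    def forward(self, bert_hidden, srl_label_ids):
        # bert_hidden: (batch, seq_len, hidden_size) from a BERT encoder
        # srl_label_ids: (batch, seq_len) integer SRL tags per token
        srl_features = self.srl_embedding(srl_label_ids)
        fused = torch.cat([bert_hidden, srl_features], dim=-1)
        return self.project(fused)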


2021 ◽  
Author(s):  
Eduardo F. Montesuma ◽  
Lucas C. Carneiro ◽  
Adson R. P. Damasceno ◽  
João Victor F. T. de Sampaio ◽  
Romulo F. Férrer Filho ◽  
...  

This paper provides an empirical study of various techniques for information retrieval and machine reading comprehension in the context of an online education platform. More specifically, our application answers students' conceptual questions about technology courses. To that end, we explore a pipeline consisting of a document retriever and a document reader. We find that using TF-IDF document representations for retrieving documents and a RoBERTa deep learning model for reading documents and answering questions yields the best performance with respect to F-score. Overall, without a fine-tuning step, the deep learning models show a significant performance gap compared with F-scores previously reported on other datasets.
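A minimal sketch of such a retriever-reader pipeline, assuming scikit-learn for TF-IDF retrieval and a publicly available SQuAD-tuned RoBERTa reader from Hugging Face; the toy corpus and model choice are illustrative, not the paper's exact setup.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# Toy course-material corpus standing in for the platform's documents.
documents = [
    "A variable is a named location that stores a value a program can change.",
    "A for loop repeats a block of code once for each item in a sequence.",
]

# Retriever: rank documents by TF-IDF cosine similarity to the question.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question, k=1):
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

# Reader: extract an answer span from the top retrieved document.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

question = "What does a for loop do?"
context = retrieve(question)[0]
print(reader(question=question, context=context))  # dict with 'answer' and 'score'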


2019 ◽  
Author(s):  
Hongyu Li ◽  
Xiyuan Zhang ◽  
Yibing Liu ◽  
Yiming Zhang ◽  
Quan Wang ◽  
...  

2021 ◽  
Vol 1955 (1) ◽  
pp. 012072
Author(s):  
Ruiheng Li ◽  
Xuan Zhang ◽  
Chengdong Li ◽  
Zhongju Zheng ◽  
Zihang Zhou ◽  
...  

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 21279-21285
Author(s):  
Hyeon-Gu Lee ◽  
Youngjin Jang ◽  
Harksoo Kim

Author(s):  
Yuanxing Zhang ◽  
Yangbin Zhang ◽  
Kaigui Bian ◽  
Xiaoming Li

Machine reading comprehension has gained attention from both industry and academia. It is a very challenging task that involves various domains such as language comprehension, knowledge inference, and summarization. Previous studies mainly focus on reading comprehension over short paragraphs, and these approaches fail to perform well on long documents. In this paper, we propose a hierarchical match attention model that instructs the machine to extract answers from a specific short span of passages for the long document reading comprehension (LDRC) task. The model takes advantage of a hierarchical LSTM to learn paragraph-level representations and implements a match mechanism (i.e., quantifying the relationship between two contexts) to find the paragraph most likely to contain the answer. The task can then be decoupled into a reading comprehension task over a short paragraph, from which the answer is produced. Experiments on the modified SQuAD dataset show that our proposed model outperforms existing reading comprehension models by at least 20% in terms of exact match (EM), F1, and the proportion of identified paragraphs that exactly match the short paragraphs where the original answers are located.
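A simplified sketch of the paragraph-selection step, assuming mean pooling and a bilinear match score in place of the paper's full hierarchical match attention; all dimensions and names here are hypothetical.

import torch
import torch.nn as nn

class ParagraphMatcher(nn.Module):
    # Sketch: a word-level LSTM encodes each paragraph (and the question),
    # a paragraph-level LSTM contextualizes paragraph vectors across the
    # document, and a bilinear layer scores each paragraph against the
    # question so the best-matching paragraph can be read for the answer.
    def __init__(self, emb_dim=100, hidden=128):
        super().__init__()
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.para_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.match = nn.Bilinear(2 * hidden, 2 * hidden, 1)

    def forward(self, para_words, question_words):
        # para_words: (num_paras, words_per_para, emb_dim)
        # question_words: (1, question_len, emb_dim)
        word_states, _ = self.word_lstm(para_words)
        para_vecs = word_states.mean(dim=1)               # (num_paras, 2*hidden)
        doc_states, _ = self.para_lstm(para_vecs.unsqueeze(0))
        para_ctx = doc_states.squeeze(0)                  # (num_paras, 2*hidden)
        q_states, _ = self.word_lstm(question_words)
        q_vec = q_states.mean(dim=1)                      # (1, 2*hidden)
        scores = self.match(para_ctx, q_vec.repeat(para_ctx.size(0), 1))
        return scores.squeeze(-1)                         # one score per paragraph

The highest-scoring paragraph would then be handed to a standard short-passage reader to extract the answer span.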

