Analysis of English Multitext Reading Comprehension Model Based on Deep Belief Neural Network

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Qiaohui Tang

To address the low accuracy and low efficiency of answer prediction in machine reading comprehension, a multitext English reading comprehension model based on the deep belief neural network is proposed. First, the paragraph selector of the multitext reading comprehension model is constructed. Second, the text reader is designed, and the deep belief neural network is introduced to predict the answer probability for each question. Finally, the model is tested on the popular English SQuAD dataset. Comparative analysis against different learning methods shows that the English multitext reading comprehension model has strong reading comprehension ability. In addition, two evaluation methods are used to score the overall performance of the model: the English multitext reading comprehension model based on the deep belief neural network scores above 90 overall, and its efficiency does not degrade when the number of documents in the dataset changes. These results show that using the deep belief neural network to improve the probability-generation performance of the model handles the English multitext reading comprehension task well, effectively reduces the difficulty of machine reading comprehension over multiple texts, and provides useful guidance for more convenient Internet knowledge acquisition.
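
The abstract above does not include an implementation, so the following is only a minimal sketch of the two-stage idea it describes, assuming a TF-IDF ranker stands in for the paragraph selector and a small stacked sigmoid network stands in for deep belief network inference; the function names, toy weights, and example texts are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a TF-IDF paragraph
# selector followed by a small stacked sigmoid network standing in for
# deep-belief-network inference over answer probabilities.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_paragraphs(question, paragraphs, top_k=2):
    """Rank paragraphs by TF-IDF cosine similarity to the question."""
    vec = TfidfVectorizer()
    mat = vec.fit_transform([question] + paragraphs)
    sims = cosine_similarity(mat[0:1], mat[1:]).ravel()
    order = np.argsort(-sims)[:top_k]
    return [paragraphs[i] for i in order], sims[order]

def answer_probability(features, weights):
    """Stacked sigmoid layers as a stand-in for DBN inference;
    real weights would come from layer-wise pretraining."""
    h = features
    for w, b in weights:
        h = 1.0 / (1.0 + np.exp(-(h @ w + b)))
    return h  # probability that the candidate span answers the question

# Toy usage with random weights.
paras = ["The cat sat on the mat.",
         "Deep belief networks stack restricted Boltzmann machines.",
         "SQuAD is a reading comprehension dataset."]
top, scores = select_paragraphs("What is SQuAD?", paras, top_k=2)
rng = np.random.default_rng(0)
weights = [(rng.normal(size=(4, 8)), np.zeros(8)),
           (rng.normal(size=(8, 1)), np.zeros(1))]
print(top, answer_probability(rng.normal(size=(1, 4)), weights))
```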

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Changchang Zeng ◽  
Shaobo Li

Machine reading comprehension (MRC) is a challenging natural language processing (NLP) task with wide application potential in question-answering robots, human-computer interaction in mobile virtual reality systems, and related fields. Recently, the emergence of pretrained models (PTMs) has brought this research field into a new era, in which the training objective plays a key role. The masked language model (MLM) is a self-supervised training objective widely used in various PTMs. As training objectives have evolved, many variants of MLM have been proposed, such as whole word masking, entity masking, phrase masking, and span masking, which differ in the length of the masked token spans. Similarly, machine reading comprehension tasks differ in answer length: the answer is often a word, a phrase, or a sentence. Thus, for MRC tasks with different answer lengths, whether the masking length of the MLM is related to performance is a question worth studying. If this hypothesis holds, it can guide us in pretraining an MLM with a mask length distribution suited to the target MRC task. In this paper, we try to uncover how much of MLM’s success on machine reading comprehension tasks comes from the correlation between the masking length distribution and the answer lengths in the MRC dataset. To address this question, (1) we propose four MRC tasks with different answer length distributions, namely, a short span extraction task, a long span extraction task, a short multiple-choice cloze task, and a long multiple-choice cloze task; (2) we create four Chinese MRC datasets for these tasks; (3) we pretrain four masked language models according to the answer length distributions of these datasets; and (4) we conduct ablation experiments on the datasets to verify our hypothesis. The experimental results confirm the hypothesis: on all four machine reading comprehension datasets, the model whose masking length distribution is correlated with the answer lengths outperforms the model without this correlation.
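
As a rough illustration of the masking idea discussed above (not the authors' code), the sketch below samples masked-span lengths from a distribution that could be estimated from a dataset's answer lengths; `mask_spans` and `length_probs` are hypothetical names.

```python
# Minimal sketch: span masking where the masked-span length is drawn from a
# distribution chosen to match the answer-length distribution of the target
# MRC dataset (short-answer data favours lengths 1-2, long-answer data 3-7).
import numpy as np

def mask_spans(tokens, length_probs, mask_token="[MASK]",
               mask_ratio=0.15, seed=0):
    """Replace roughly mask_ratio of tokens with [MASK], in spans whose
    lengths follow length_probs."""
    rng = np.random.default_rng(seed)
    tokens = list(tokens)
    budget = max(1, int(len(tokens) * mask_ratio))
    lengths = list(length_probs.keys())
    probs = np.array([length_probs[l] for l in lengths], dtype=float)
    probs /= probs.sum()
    while budget > 0:
        span_len = min(int(rng.choice(lengths, p=probs)), budget)
        start = int(rng.integers(0, max(1, len(tokens) - span_len)))
        for i in range(start, start + span_len):
            tokens[i] = mask_token
        budget -= span_len
    return tokens

print(mask_spans("the cat sat on the mat near the door".split(),
                 length_probs={1: 0.6, 2: 0.3, 3: 0.1}))
```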


2020 ◽  
Author(s):  
Marie-Anne Xu ◽  
Rahul Khanna

Recent progress in machine reading comprehension and question-answering has allowed machines to reach and even surpass human question-answering performance. However, the majority of these questions have only one answer, and more substantial testing on questions with multiple answers, or multi-span questions, has not yet been carried out. Thus, we introduce a newly compiled dataset consisting of questions with multiple answers that originate from previously existing datasets. In addition, we run BERT-based models pre-trained for question-answering on our constructed dataset to evaluate their reading comprehension abilities. Among the three BERT-based models we ran, RoBERTa exhibits the highest consistent performance, regardless of size. We find that all our models perform similarly on this new, multi-span dataset (21.492% F1) compared to the single-span source datasets (~33.36% F1). While the models tested on the source datasets were slightly fine-tuned, performance is similar enough to judge that task formulation does not drastically affect question-answering abilities. Our evaluations indicate that these models are indeed capable of adjusting to answer questions that require multiple answers. We hope that our findings will assist future development in question-answering and improve existing question-answering products and methods.
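
For readers unfamiliar with how multi-span answers might be scored, here is a minimal sketch of a token-level F1 in the spirit of the SQuAD metric, extended to sets of spans; `multi_span_f1` is a hypothetical helper, not the authors' evaluation script.

```python
# Minimal sketch: token-level F1 for multi-span answers, treating the
# predicted and gold spans as bags of tokens.
from collections import Counter

def multi_span_f1(predicted_spans, gold_spans):
    pred_tokens = Counter(t for s in predicted_spans for t in s.lower().split())
    gold_tokens = Counter(t for s in gold_spans for t in s.lower().split())
    overlap = sum((pred_tokens & gold_tokens).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_tokens.values())
    recall = overlap / sum(gold_tokens.values())
    return 2 * precision * recall / (precision + recall)

print(multi_span_f1(["New York", "Boston"], ["New York City", "Boston"]))
```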


2016 ◽  
Vol 6 (7) ◽  
pp. 159
Author(s):  
M. Rahim Bohlooli Niri

The purpose of the present study is to investigate the relationship between successful readers’ strategies in Persian and English and the impact of instruction in such strategies on English reading comprehension ability. The study relies on Casanave’s (1998) expanded view of schema theory (the strategy schema), Goodman’s (1971) language transfer or linguistic independence hypothesis, and Clarke’s short-circuit or language ceiling hypothesis in ESL or EFL. This study also aims at answering the question of reading problem versus language problem, first raised by Alderson (1984, pp. 1-27) and then followed by Carrell (1991, pp. 159-179).


2020 ◽  
Vol 34 (10) ◽  
pp. 13987-13988
Author(s):  
Xuanyu Zhang ◽  
Zhichun Wang

Most models for machine reading comprehension (MRC) focus on recurrent neural networks (RNNs) and attention mechanisms, though convolutional neural networks (CNNs) are also used for time efficiency. However, little attention has been paid to leveraging CNNs and RNNs together in MRC. For deeper understanding, humans sometimes need local information about short phrases and sometimes need global context across long passages. In this paper, we propose a novel architecture, Rception, to capture and leverage both local deep information and global wide context. It fuses different kinds of networks and hyper-parameters horizontally rather than simply stacking them layer by layer vertically. Experiments on the Stanford Question Answering Dataset (SQuAD) show that our proposed architecture achieves good performance.
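
The paper's code is not shown here; the sketch below is one plausible reading of "horizontal" fusion, running a CNN branch (local phrase features) and a BiLSTM branch (global context) side by side on the same embeddings and concatenating their outputs. The class name and dimensions are assumptions, not the actual Rception architecture.

```python
# Minimal sketch: a CNN branch and a bidirectional LSTM branch applied in
# parallel to the same token embeddings, then fused by concatenation.
import torch
import torch.nn as nn

class HorizontalFusion(nn.Module):
    def __init__(self, emb_dim=128, conv_channels=64, rnn_hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(emb_dim, rnn_hidden,
                           batch_first=True, bidirectional=True)

    def forward(self, x):                        # x: (batch, seq_len, emb_dim)
        local = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        global_ctx, _ = self.rnn(x)
        return torch.cat([local, global_ctx], dim=-1)

out = HorizontalFusion()(torch.randn(2, 20, 128))
print(out.shape)  # torch.Size([2, 20, 192])
```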


2021 ◽  
Author(s):  
Samreen Ahmed ◽  
Shakeel Khoja

In recent years, low-resource Machine Reading Comprehension (MRC) has made significant progress, with models achieving remarkable performance on various language datasets. However, none of these models have been customized for the Urdu language. This work explores the semi-automated creation of the Urdu Question Answering Dataset (UQuAD1.0) by combining machine-translated SQuAD with human-generated samples derived from Wikipedia articles and Urdu reading comprehension worksheets from Cambridge O-level books. UQuAD1.0 is a large-scale Urdu dataset intended for extractive machine reading comprehension, consisting of 49k question-answer pairs in (question, passage, answer) format. In UQuAD1.0, 45,000 QA pairs were generated by machine translation of the original SQuAD1.0 and approximately 4,000 pairs via crowdsourcing. In this study, we used two types of MRC models: rule-based baselines and advanced Transformer-based models. We find that the latter outperform the former, so we concentrate solely on Transformer-based architectures. Using XLM-RoBERTa and multilingual BERT, we obtain F1 scores of 0.66 and 0.63, respectively.
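
A minimal sketch of the kind of workflow described above, assuming the Hugging Face `transformers` library: load a multilingual encoder with a span-extraction head and query it on a passage. The checkpoint names are standard public ones; real use would first fine-tune the head on UQuAD1.0, which is not shown here.

```python
# Minimal sketch: extractive QA with a multilingual encoder. Note that the
# QA head of a base checkpoint is untrained, so answers are meaningless
# until the model is fine-tuned on a dataset such as UQuAD1.0.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "xlm-roberta-base"  # alternative: "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(question="What is the capital of Punjab?",
            context="Lahore is the capital of the Punjab province.")
print(result)  # extracted span, character offsets, and confidence score
```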


2020 ◽  
Vol 34 (05) ◽  
pp. 8010-8017 ◽  
Author(s):  
Di Jin ◽  
Shuyang Gao ◽  
Jiun-Yu Kao ◽  
Tagyoung Chung ◽  
Dilek Hakkani-Tur

Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills, such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart, where answers are usually spans of text within the given passages. Moreover, most existing MCQA datasets are small, making the task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: a coarse-tuning stage using out-of-domain datasets and a multi-task learning stage using a larger in-domain dataset to help the model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate that MMM significantly advances the state of the art on four representative MCQA datasets.
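
The exact MAN classifier is not reproduced here; the following is a generic multi-step attention sketch in the same spirit, where a query state repeatedly attends over the passage representation and the refined state scores each answer option. The module name, dimensions, and GRU update are assumptions rather than the paper's architecture.

```python
# Minimal sketch: K steps of attention over the passage refine a query
# state, which is then dotted with each option representation to give logits.
import torch
import torch.nn as nn

class MultiStepAttention(nn.Module):
    def __init__(self, dim=128, steps=3):
        super().__init__()
        self.steps = steps
        self.gru = nn.GRUCell(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, passage, question, options):
        # passage: (B, T, D), question: (B, D), options: (B, num_opts, D)
        state = question
        for _ in range(self.steps):
            attn = torch.softmax(
                torch.einsum("btd,bd->bt", passage, self.proj(state)), dim=-1)
            context = torch.einsum("bt,btd->bd", attn, passage)
            state = self.gru(context, state)
        return torch.einsum("bnd,bd->bn", options, state)  # option logits

logits = MultiStepAttention()(torch.randn(2, 30, 128),
                              torch.randn(2, 128),
                              torch.randn(2, 4, 128))
print(logits.shape)  # torch.Size([2, 4])
```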

