Comparison between CBT and PBT: Assessment of Gap-filling and Multiple-choice Cloze in Reading Comprehension

Author(s):
Mo Li
Haifeng Pu
2021
pp. 136216882110115
Author(s):
Ali Amjadi
Seyed Hassan Talebi

By integrating social-emotional learning skills into Collaborative Strategic Reading (CSR), the current study aimed to extend the efficacy of CSR for teaching reading strategies to students in rural areas from a working-class community. To this end, forty-four students, forming the comparison and experimental groups, were taught reading strategies through CSR and ECSR (Extended Collaborative Strategic Reading), respectively. A reading comprehension test with different question types was administered to the students as a pretest and posttest, and an interview was conducted at the end of the study to investigate the students' perceptions of reading strategy instruction through CSR and ECSR. Analysis of the data indicated that only the ECSR group improved significantly in overall reading comprehension. A componential analysis of the reading test further showed that, whereas the CSR group showed no significant improvement in any of the four test formats (true–false, multiple-choice, matching, and cloze), the ECSR group improved significantly in the multiple-choice and cloze formats. Moreover, although students in both groups viewed the interventions positively, the students in the ECSR group also improved in social-emotional and communication skills. It appears that CSR can be made more effective by incorporating a social-emotional component.


2021
Vol 2021
pp. 1-17
Author(s):
Changchang Zeng
Shaobo Li

Machine reading comprehension (MRC) is a challenging natural language processing (NLP) task. It has wide application potential in fields such as question-answering robots and human-computer interaction in mobile virtual reality systems. Recently, the emergence of pretrained models (PTMs) has brought this research field into a new era, in which the training objective plays a key role. The masked language model (MLM) is a self-supervised training objective widely used in various PTMs. With the development of training objectives, many variants of MLM have been proposed, such as whole word masking, entity masking, phrase masking, and span masking. Different MLM variants mask tokens of different lengths. Similarly, the answer length differs across machine reading comprehension tasks: the answer is often a word, a phrase, or a sentence. Thus, in MRC tasks with different answer lengths, whether the masking length of the MLM is related to performance is a question worth studying. If this hypothesis holds, it can guide us in pretraining an MLM with a mask length distribution suited to the target MRC task. In this paper, we try to uncover how much of the MLM's success in machine reading comprehension tasks comes from the correlation between the masking length distribution and the answer length in the MRC dataset. To address this issue, (1) we propose four MRC tasks with different answer length distributions, namely, the short span extraction task, long span extraction task, short multiple-choice cloze task, and long multiple-choice cloze task; (2) we create four Chinese MRC datasets for these tasks; (3) we pretrain four masked language models according to the answer length distributions of these datasets; and (4) we conduct ablation experiments on the datasets to verify our hypothesis. The experimental results support the hypothesis: on all four machine reading comprehension datasets, the model whose masking length distribution is correlated with the answer length distribution outperforms the model without this correlation.
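To make the masking-length idea concrete, the following is a minimal Python sketch of span masking with a configurable length distribution; it is not the authors' implementation, and the function name, mask budget, and distributions are illustrative assumptions.

import random

def span_mask(tokens, mask_token="[MASK]", mask_ratio=0.15,
              span_lengths=(1, 2, 3), span_probs=(0.6, 0.3, 0.1)):
    # Mask contiguous spans whose lengths are drawn from a configurable
    # distribution, so the masking length can be matched to the expected
    # answer length of the downstream MRC task.
    tokens = list(tokens)
    budget = max(1, int(len(tokens) * mask_ratio))
    masked = set()
    attempts = 0
    while budget > 0 and attempts < 100:
        attempts += 1
        length = min(random.choices(span_lengths, weights=span_probs)[0], budget)
        start = random.randrange(0, len(tokens) - length + 1)
        span = set(range(start, start + length))
        if masked & span:
            continue  # skip spans that overlap an already masked region
        masked |= span
        budget -= length
    return [mask_token if i in masked else tok for i, tok in enumerate(tokens)]

# A short-answer-oriented distribution masks mostly single tokens; a
# long-answer-oriented one would shift span_probs toward longer spans.
print(span_mask("machine reading comprehension is a challenging NLP task".split()))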


2020
Vol 8
pp. 141-155
Author(s):
Kai Sun
Dian Yu
Dong Yu
Claire Cardie

Machine reading comprehension tasks require a machine reader to answer questions relevant to a given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations. We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best-performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and of data augmentation based on translations of relevant English datasets on model performance. We expect C3 to present great challenges to existing systems, as answering 86.8% of its questions requires both knowledge within and beyond the accompanying document, and we hope that C3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text. C3 is available at https://dataset.org/c3/.
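For readers unfamiliar with the task format, here is a minimal sketch of how a free-form multiple-choice MRC example might be represented and scored; the field names are illustrative assumptions, not the official C3 schema.

def accuracy(predictions, examples):
    # Fraction of questions whose predicted choice index matches the gold label.
    correct = sum(int(pred == ex["label"]) for pred, ex in zip(predictions, examples))
    return correct / len(examples)

# Hypothetical example record (field names assumed):
examples = [{
    "document": "A dialogue or mixed-genre passage ...",
    "question": "What does the speaker imply?",
    "choices": ["option A", "option B", "option C", "option D"],
    "label": 2,  # index of the correct choice
}]
print(accuracy([2], examples))  # 1.0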


Author(s):
Zhipeng Chen
Yiming Cui
Wentao Ma
Shijin Wang
Guoping Hu

Machine Reading Comprehension (MRC) with multiple-choice questions requires the machine to read a given passage and select the correct answer from several candidates. In this paper, we propose a novel approach, the Convolutional Spatial Attention (CSA) model, which can better handle MRC with multiple-choice questions. The proposed model fully extracts the mutual information among the passage, the question, and the candidates to form enriched representations. Furthermore, to merge the various attention results, we use convolutional operations to dynamically summarize the attention values within regions of different sizes. Experimental results show that the proposed model gives substantial improvements over various state-of-the-art systems on both the RACE and SemEval-2018 Task 11 datasets.
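A rough sketch of the convolutional-summary idea follows; it is not the authors' CSA model. The intuition is to treat the passage-candidate attention matrix as a two-dimensional map and convolve it with kernels of several sizes, so attention values are summarized over regions of different sizes. The kernel sizes, channel count, and pooling choice below are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttentionSummary(nn.Module):
    # Summarize a passage-candidate attention map with convolutions of
    # several kernel sizes, producing a fixed-size feature per candidate.
    def __init__(self, out_channels=8, kernel_sizes=(3, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(1, out_channels, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, passage, candidate):
        # passage: (batch, p_len, dim); candidate: (batch, c_len, dim)
        attn = torch.bmm(passage, candidate.transpose(1, 2))  # (batch, p_len, c_len)
        attn = attn.unsqueeze(1)                               # add a channel dimension
        feats = [F.max_pool2d(torch.relu(conv(attn)), attn.shape[-2:])
                 for conv in self.convs]                       # pool over the whole map
        return torch.cat([f.flatten(1) for f in feats], dim=-1)

# Example: summarize the attention between one passage and four candidate answers.
model = ConvAttentionSummary()
passage = torch.randn(4, 120, 64)    # passage encoding, repeated once per candidate
candidates = torch.randn(4, 20, 64)  # encodings of four candidate answers
print(model(passage, candidates).shape)  # torch.Size([4, 16])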


SAGE Open
2019
Vol 9 (3)
pp. 215824401986149
Author(s):
Abdel Rahman Mitib Altakhaineh
Mona Kamal Ibrahim

This study examined the incidental acquisition of English prepositions by Arabic-speaking English as a foreign language (EFL) learners. Employing reading comprehension exercises as a treatment, we adopted a pre-test/post-test experimental design to determine the effect of the treatment on the participants' incidental acquisition of English prepositions. For the purpose of the study, we divided the participants into a treatment group, who engaged in reading comprehension exercises for one academic term, and a control group, who did not. We used a multiple-choice test and a fill-in-the-blank test to measure the participants' receptive and productive knowledge of English prepositions, respectively. We also conducted an introspective session with the treatment group following the administration of the post-tests to identify areas of difficulty. The results mainly indicated that reading accompanied by exercises produced better incidental gains in the acquisition of English prepositions, especially on the multiple-choice test. The study concludes with recommendations for further research.

