Effects of computer-based testing on test performance and testing motivation

2012 ◽ Vol 28 (5) ◽ pp. 1580-1586 ◽ Author(s): Yan Piaw Chua

2021 ◽ Vol 11 (1) ◽ Author(s): Wenjing Yu ◽ Noriko Iwashita

Abstract: Computer-based testing (CBT), which refers to delivering assessments by computer, is widely used in large-scale English proficiency tests worldwide. Despite the increasing use of CBT in China, limited research is available on whether CBT can be used for the Test for English Majors-Band 4 (TEM 4). The current study investigated whether testing mode affected TEM 4 scores, and which factors (i.e., computer familiarity and attitude towards CBT) might correlate with performance on the computer-based TEM 4. A total of 92 Chinese undergraduate students were randomly assigned to one of two groups, CBT or paper-based testing (PBT), and took the test. A mixed-methods design was employed, comprising (1) quantitative and qualitative analysis of test performance in the two modes, as well as of the CBT group's computer familiarity and attitudes towards the mode, and (2) thematic analysis of semi-structured interviews. The results revealed that (1) test scores in CBT and PBT were comparable; (2) two items in the computer familiarity questionnaire, i.e., comfort level when reading articles on the computer and losing track of time when using computers, positively correlated with CBT scores; and (3) participants' attitudes towards CBT did not affect test performance.


2020 ◽ Vol 10 (1) ◽ pp. 235-244 ◽ Author(s): Elena A. M. Gandini ◽ Tania Horák

Abstract: This contribution reports on the development and piloting of a computer-based version of the test of English as a foreign language produced by the University of Central Lancashire (UCLan), where it is currently used for the admission of international students and the subsequent evaluation of their language progress. Among other benefits, computer-based testing allows better, individualised feedback to both teachers and students, and it can provide a more authentic test experience in light of the digital shift that UK universities are currently undergoing. In particular, the qualitative improvement in the feedback available to test-takers and teachers was a crucial factor for us. Providing students with personalised feedback, that is, feedback directly linked to their performance, has positive washforward, because it means we can guide their future learning, highlighting the areas they need to work on to improve their language skills and giving them suggestions on how to succeed in academia. Furthermore, explaining the meaning of test results in detail improves transparency and, ultimately, washback, as teachers can use the more accessible marking criteria, together with information on how their students performed, to review plans and schemes of work for subsequent courses.


PLoS ONE ◽ 2015 ◽ Vol 10 (12) ◽ pp. e0143616 ◽ Author(s): Anja J. Boevé ◽ Rob R. Meijer ◽ Casper J. Albers ◽ Yta Beetsma ◽ Roel J. Bosker
