Automatic Grading and Adaptive Question Selection for English Language Testing

2020, Vol 6 (3), pp. 22-29
Author(s): Chitra Bhole, Jahanvi Dave, Tanaya Surve, Khushboo Thakkar
2019, Vol 8 (2)
Author(s): Dinh Thi Bac Binh, Dinh Thi Kieu Trinh

The International English Language Testing System (IELTS) is recognized as a reliable tool for assessing whether a person is able to study or train in English. Every year, thousands of students sit for IELTS. However, the number of those recognized as capable enough to take a course in English is somewhat limited, especially among those who are not majoring in English at their universities. IELTS Reading is considered a discerning skill, and it is of equal importance to listening, speaking and writing in reaching an IELTS objective of band 6 or 6.5. As teachers of English at a training institution, the authors recognize that students can make time-saving improvements in their reading command under their teachers' insightful guidance.


2014, Vol 38 (5), pp. 46
Author(s): Daniel Dunkley

In this interview Professor Green explains the work of CRELLA (the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire) and its role in the improvement of language testing. The institute contributes to this effort in many ways. For example, in the field of language education it is a partner in English Profile (EP: www.englishprofile.org), a collaborative research programme working towards a graded guide to learner language at different CEFR (Common European Framework of Reference) levels, based on the 50-million-word Cambridge Learner Corpus. Among other things, the EP has helped to inform the development of the CEFR-J in Japan. In this interview, Professor Green also outlines his own work, especially in the areas of washback and assessment literacy.


2007, Vol 40 (3), pp. 268-271

07–449 Barber, Richard (Dubai Women's College, UAE), A practical model for creating efficient in-house placement tests. The Language Teacher (Japan Association for Language Teaching) 31.2 (2007), 3–7.
07–450 Chang, Yuh-Fang (National Chung Hsing U, Taiwan), On the use of the immediate recall task as a measure of second language reading comprehension. Language Testing (Hodder Arnold) 23.4 (2006), 520–543.
07–451 Hyun-Ju, Kim (U Seoul, Korea), World Englishes in language testing: A call for research. English Today (Cambridge University Press) 22.4 (2006), 32–39.
07–452 Mahon, Elizabeth A. (Durham Public Schools, North Carolina, USA), High-stakes testing and English language learners: Questions of validity. Bilingual Research Journal (National Association for Bilingual Education) 30.2 (2006), 479–497.
07–453 McCoy, Damien (Australian Centre for Education and Training, Vietnam), Utilizing students' preferred language learning strategies for IELTS test preparation. EA Journal (English Australia) 23.1 (2006), 3–13.
07–454 Menken, Kate (City U New York, USA), Teaching to the test: How No Child Left Behind impacts language policy, curriculum, and instruction for English language learners. Bilingual Research Journal (National Association for Bilingual Education) 30.2 (2006), 521–547.
07–455 Pae, Tae-Il (Yeungnam U, Korea) & Gi-Pyo Park, Examining the relationship between differential item functioning and differential test functioning. Language Testing (Hodder Arnold) 23.4 (2006), 475–496.
07–456 Rimmer, Wayne (U Reading, UK), Measuring grammatical complexity: The Gordian knot. Language Testing (Hodder Arnold) 23.4 (2006), 497–519.
07–457 Rupp, André A. (Humboldt U, Berlin, Germany), Tracy Ferne & Hyeran Choi, How assessing reading comprehension with multiple-choice questions shapes the construct: A cognitive processing perspective. Language Testing (Hodder Arnold) 23.4 (2006), 441–474.
07–458 Vanderveen, Terry (Kanagawa U, Japan), The effect of EFL students' self-monitoring on class achievement test scores. JALT Journal (Japan Association for Language Teaching) 28.2 (2006), 197–206.
07–459 Van Moere, Alistair (Lancaster U, UK), Validity evidence in a university group oral test. Language Testing (Hodder Arnold) 23.4 (2006), 411–440.


2019, Vol 9 (1)
Author(s): M. Obaidul Hamid, Ian Hardy, Vicente Reyes

Abstract: Although language test-takers have been the focus of much theoretical and empirical work in recent years, this work has been mainly concerned with their attitudes to test preparation and test-taking strategies, giving insufficient attention to their views on broader socio-political and ethical issues. This article examines test-takers' perceptions and evaluations of the fairness, justice and validity of global tests of English, with a particular focus upon the International English Language Testing System (IELTS). Based on relevant literature and theorizing about such tests, and on self-reported test experience data gathered from test-takers (N = 430) from 49 countries, we demonstrate how test-takers experienced fairness and justice in complex ways that problematized the purported technical excellence and validity of IELTS. Even though there was some evidence of support for the test as a fair measure of students' English capacity, the extent to which it actually reflected their language capabilities was open to question. At the same time, the participants expressed concerns about whether IELTS was a vehicle for raising revenue and for justifying immigration policies, thus raising questions about the justness of the test. The research foregrounds the importance of focusing attention upon the socio-political and ethical circumstances that currently attend large-scale, standardized English language testing.

