Computer-Assisted Language Testing and Learner Behavior


Author(s):  
Ethan Douglas Quaid ◽  
Alex Barrett

Semi-direct speaking tests have become an increasingly favored method of assessing spoken performance. The evidence underpinning their continued development and use rests largely on language testing and assessment researchers' claim, advanced through theoretical and empirical investigations from multiple perspectives, that they are interchangeable with more traditional, direct face-to-face oral proficiency interviews. This chapter first provides background and research synopses of four significant test facets that have formed the bases for semi-direct and direct speaking test comparison studies. It then presents a recent case study comparing test taker output from a computer-based Aptis speaking test with output from a purposively developed, otherwise identical face-to-face oral proficiency interview; the study found a slight register shift that may be viewed as advantageous for semi-direct speaking tests. Finally, future research directions are proposed in light of the developments in semi-direct speaking test research presented throughout the chapter.
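
Comparison studies of this kind typically quantify register differences from transcribed test taker output. As a purely illustrative aid (not the case study's actual instruments), the following minimal Python sketch contrasts two transcript snippets on type-token ratio and mean word length, two crude proxies for register; the sample sentences are invented.

```python
# Minimal sketch: rough register comparison of two transcript snippets.
# Type-token ratio and mean word length are only crude proxies for
# register, not the measures used in the case study described above.
import re

def register_profile(transcript: str) -> dict:
    """Return simple lexical statistics for one transcript."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    types = set(tokens)
    return {
        "tokens": len(tokens),
        "type_token_ratio": len(types) / len(tokens) if tokens else 0.0,
        "mean_word_length": sum(map(len, tokens)) / len(tokens) if tokens else 0.0,
    }

# Invented sample output; real studies would use full transcripts.
face_to_face = "yeah I mean it's kind of hard to say really"
computer_based = "in my opinion the principal advantage is flexibility"

for label, text in [("face-to-face", face_to_face), ("computer-based", computer_based)]:
    print(label, register_profile(text))
```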


2004 ◽  
Vol 37 (1) ◽  
pp. 66-69

04–83 Akiyama, Tomoyasu (U. Melbourne, Australia). Assessing speaking: issues in school-based assessment and the introduction of speaking tests into the Japanese senior high school entrance examination. JALT Journal (Tokyo, Japan), 25, 2 (2003), 117–141.
04–84 Chiang, Steve (Yuan Ze University, Taiwan). The importance of cohesive conditions to perceptions of writing quality at the early stages of foreign language learning. System (Oxford, UK), 31 (2003), 471–484.
04–85 Escamilla, Kathy, Mahon, Elizabeth, Riley-Bernal, Heather and Rutledge, David (U. of Colorado, Boulder, USA). High-stakes testing, Latinos, and English language learners: lessons from Colorado. Bilingual Research Journal (Arizona, USA), 27, 1 (2003), 25–49.
04–86 Gorsuch, Greta (Texas Tech U., USA; Email: [email protected]). Test takers' experiences with computer-administered listening comprehension tests: interviewing for qualitative explorations of test validity. CALICO Journal (Texas, USA), 21, 2 (2004), 339–371.
04–87 Hardcastle, Peter. How to not test language (Part 2). Language Testing Update (Lancaster, UK), 33 (2003), 28–35.
04–88 Hemard, D. and Cushion, S. (London Metropolitan University, UK; Email: [email protected]). Design and evaluation of an online test: assessment conceived as a complementary CALL tool. Computer Assisted Language Learning (Lisse, The Netherlands), 16, 2–3 (2003), 119–139.
04–89 Ishii, David N. and Baba, Kyoko (U. of Toronto, Canada; Email: [email protected]). Locally developed oral skills evaluation in ESL/EFL classrooms: a checklist for developing meaningful assessment procedures. TESL Canada Journal/Revue TESL du Canada (Burnaby, Canada), 21, 1 (2003), 79–96.
04–90 Iwashita, Noriko and Grove, Elizabeth (University of Melbourne, Australia). A comparison of analytic and holistic scales in the context of a specific-purpose speaking test. Prospect (Sydney, Australia), 18, 3 (2003), 25–35.
04–91 Lee, Yong-Won (Educational Testing Service, Princeton, NJ, US; Email: [email protected]). Examining passage-related local item dependence (LID) and measurement construct using Q3 statistics in an EFL reading comprehension test. Language Testing (London, UK), 21, 1 (2004), 74–100.
04–92 Qian, David D. (Hong Kong Polytechnic U., Hong Kong; Email: [email protected]) and Schedl, Mary (Educational Testing Service, Princeton, NJ, US). Evaluation of an in-depth vocabulary knowledge measure for assessing reading performance. Language Testing (London, UK), 21, 1 (2004), 28–52.
04–93 Rea-Dickins, Pauline (University of Bristol, UK). Classroom assessment of English as an additional language: Key Stage 1 contexts – summary of research findings. Language Testing Update (Lancaster, UK), 33 (2003), 48–53.
04–94 Rodgers, Catherine, Meara, Paul and Jacobs, Gabriel (U. of Wales Swansea, UK). Factors affecting the standardisation of translation examinations. Language Learning Journal (London, UK), 28 (Winter 2003), 49–54.


ReCALL ◽  
2002 ◽  
Vol 14 (1) ◽  
pp. 167-181 ◽  
Author(s):  
MARIE J. MYERS

With innovative ways now available to assess language performance through computer technology, practitioners have to rethink their preferred language testing strategies. It is necessary to take into account both new developments in language learning and teaching research and the latest features computers offer to support language assessment. In addition to the best practices developed over the years in the field, provision must be made for authentic assessment of intercultural communication abilities. After a review of the recent language-testing literature and a discussion of the problems it identifies, this paper explores the latest developments in computer technology and proposes areas of language testing in light of these findings. A practical application follows: an adaptation of the latest evaluation model within an Ontario school board. The model represents unit planning as an isosceles triangle with assessed assignments stacked in horizontal bands from the base to the vertex, i.e. the top. The suggestion is offered that this approach can be enriched by turning the triangle into a pyramid with a different model on each side. Accessing the four sides by rotating the pyramid allows a broader range of activities, culminating in a single final assessment task at the summit.
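
The proposed extension is easiest to picture as a data structure: four faces, one evaluation model per side, each holding assessment bands ordered from base to vertex and all converging on a single summit task. A minimal sketch of that structure follows; every model and task name in it is hypothetical, not drawn from the paper.

```python
# Minimal sketch of the pyramid planning model: four faces (one model
# per side), each stacking assessment bands from base to vertex, all
# converging on one final summit task. All names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Face:
    model: str                                       # the evaluation model on this side
    bands: list[str] = field(default_factory=list)   # base-to-vertex assessed assignments

@dataclass
class Pyramid:
    faces: list[Face]
    summit_task: str                                 # single culminating assessment

unit = Pyramid(
    faces=[
        Face("oral interaction", ["pair dialogue", "role play", "interview"]),
        Face("listening", ["gist questions", "detail questions", "note-taking"]),
        Face("reading", ["skimming task", "scanning task", "critical response"]),
        Face("writing", ["guided paragraph", "draft essay", "revised essay"]),
    ],
    summit_task="integrated intercultural presentation",
)

for face in unit.faces:  # "rotating" the pyramid, one face at a time
    print(face.model, "->", " / ".join(face.bands))
print("summit:", unit.summit_task)
```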


ExELL ◽  
2017 ◽  
Vol 5 (1) ◽  
pp. 55-70
Author(s):  
Zaha Alonazi

Computerized dynamic assessment (CDA) presents itself as a new type of assessment that incorporates mediation into the assessment process. Proponents of dynamic assessment (DA) in general, and of CDA in particular, argue that the goals of DA are congruent with a concept of validity that underscores the social consequences of test use and the integration of learning and assessment (Sternberg & Grigorenko, 2002; Poehner, 2008; Shabani, 2012). However, empirical research on CDA falls short of supporting this argument: studies of CDA are riddled with ill-defined constructs and offer insufficient evidence with regard to the aspects of validity postulated by Messick (1989, 1990, 1996). Given the scarcity of research on CDA, this paper explores the potential and viability of this intervention-based assessment in the computer-assisted language testing context in light of its conformity with Messick's unitary view of validity. The paper begins with a discussion of the theoretical foundations and models of DA. It then discusses the differences between DA and non-dynamic assessment (NDA) measures before critically appraising the empirical studies on CDA. The critical review of the CDA literature aims to shed light on drawbacks in the design of CDA research and on the compatibility of the concept of construct validity in CDA with Messick's (1989) unitary concept of validity. The review concludes with recommendations for rectifying these gaps so as to establish CDA in a more prominent position in computerized language testing.
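
For readers unfamiliar with how CDA instruments operationalize mediation, they typically present a fixed ladder of increasingly explicit prompts and weight the item score by how much mediation the learner needed. The sketch below illustrates one such graduated-prompt item; the four-step hint sequence and the linear score weighting are illustrative assumptions, not the design of any study reviewed here.

```python
# Minimal sketch of a graduated-prompt CDA item: mediation is a fixed
# ladder of increasingly explicit hints, and the item score is weighted
# by how much mediation the learner consumed. The weighting scheme is
# illustrative, not taken from any specific CDA instrument.

def administer_item(prompt: str, answer: str, hints: list[str], respond) -> float:
    """Run one mediated item; `respond` maps a prompt to the learner's answer."""
    attempts = [prompt] + [f"{prompt}\nHint: {h}" for h in hints]
    for level, mediated_prompt in enumerate(attempts):
        if respond(mediated_prompt).strip().lower() == answer:
            # Full credit if unmediated; less credit for each hint needed.
            return 1.0 - level / len(attempts)
    return 0.0  # unsolved even with maximal mediation

# Scripted learner who succeeds only after the second hint.
replies = iter(["goed", "goed", "went"])
score = administer_item(
    prompt="Yesterday she ___ (go) to school.",
    answer="went",
    hints=["Think about the tense.", "'Go' is an irregular verb.", "It starts with 'w'."],
    respond=lambda p: next(replies),
)
print(score)  # 0.5: correct on the third of four attempt levels
```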


2016 ◽  
Vol 7 (1) ◽  
pp. 989-996
Author(s):  
Masoud Yaghoubi Notash ◽  
Maryam Mahmoodi

The sweeping trend of communication technologies increasingly makes computers and computer-based operations an indispensable part of our everyday lives. Education, as an institution deeply rooted in the broader community, can therefore hardly afford to ignore the digital era, and assessment, as a subcategory of education, is subject to the same trend. The present study, focusing on the EFL situation in Iran, compared the writing accuracy and length of two equal-sized groups of advanced learners (each containing 20). One group employed the traditional paper-and-pencil testing (PPT) mode while the other used computer-assisted language testing (CALT). Results indicated significantly more accurate writing for the CALT group but longer written production for the PPT group. Implications of the study are discussed.
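
Group comparisons of this kind are commonly analyzed with independent-samples t-tests on each outcome measure. The sketch below shows that analysis for the accuracy scores; whether the study used exactly this test is an assumption, and all numbers are invented placeholders rather than the study's data (the same call would apply to essay length in words).

```python
# Minimal sketch of the two-group comparison: an independent-samples
# t-test on writing accuracy. All numbers are invented placeholders,
# not data from the study described above.
from scipy import stats

# Hypothetical per-learner accuracy scores, 20 per group.
calt_accuracy = [0.82, 0.79, 0.85, 0.88, 0.76, 0.81, 0.84, 0.90, 0.78, 0.83,
                 0.86, 0.80, 0.87, 0.82, 0.79, 0.85, 0.91, 0.77, 0.84, 0.88]
ppt_accuracy  = [0.71, 0.68, 0.74, 0.70, 0.66, 0.73, 0.69, 0.75, 0.67, 0.72,
                 0.70, 0.74, 0.68, 0.71, 0.73, 0.66, 0.69, 0.75, 0.72, 0.70]

t, p = stats.ttest_ind(calt_accuracy, ppt_accuracy)
print(f"accuracy: t = {t:.2f}, p = {p:.4f}")  # significant if p < .05
```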

