Second Language Assessment

Author(s):  
Elana Shohamy

In Language Assessment Across Modalities: Paired-Papers on Signed and Spoken Language Assessment, volume editors Tobias Haug, Wolfgang Mann, and Ute Knoch bring together, for the first time, researchers, clinicians, and practitioners from two different fields: signed language and spoken language. The volume examines theoretical and practical issues related to 12 topics, ranging from test development and language assessment of bi-/multilingual learners to construct issues in second-language assessment (including the Common European Framework of Reference [CEFR]) and language assessment literacy in second-language assessment contexts. Each topic is addressed separately for spoken and signed language by experts from the relevant field. This is followed by a joint discussion in which the chapter authors highlight key issues in each field and their possible implications for the other field. What makes this volume unique is that it is the first of its kind to bring experts from signed and spoken language assessment to the same table. The dialogues that result from this collaboration not only help to establish a shared appreciation and understanding of the challenges experienced in the new field of signed language assessment but also breathe new life into, and provide a new perspective on, some of the issues that have occupied the field of spoken language assessment for decades. It is hoped that this will open the door to new and exciting cross-disciplinary collaborations.


2021
pp. 329-332
Author(s):  
Tobias Haug ◽  
Ute Knoch ◽  
Wolfgang Mann

This chapter is a joint discussion of key issues in the scoring of signed and spoken language assessments that were raised in Chapters 9.1 and 9.2. One aspect of signed language assessment that has the potential to stimulate new research in spoken second language (L2) assessment is the scoring of nonverbal speaker behaviors. This aspect is rarely represented in the scoring criteria of spoken assessments and in many cases is not even available to raters during the scoring process. The authors argue, therefore, for a broadening of the construct of spoken language assessment to also include elements of nonverbal communication in the scoring descriptors. Additionally, the chapter discusses the importance of rater training for signed language assessments, the application of Rasch analysis to investigate possible reasons for disagreement between raters, and the need for research on rating scales.
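
As a concrete anchor for the Rasch analysis mentioned above, rater studies of this kind often use the many-facet Rasch model (e.g., Linacre's formulation); the notation below is one standard presentation, not taken from the chapter itself:

\ln\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k

where \theta_n is the ability of examinee n, \delta_i the difficulty of task i, \alpha_j the severity of rater j, and \tau_k the threshold between score categories k-1 and k. Under this model, fit statistics for the rater facet can flag raters who are systematically harsher, more lenient, or less self-consistent than their peers, which is one way disagreement between raters is diagnosed.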


1998
Vol 18
pp. 208-218
Author(s):  
Kyle Perkins

The fields of reading comprehension per se and second language reading comprehension are vast indeed, and an attempt to survey them will, of necessity, be attenuated in a chapter of this size. As a consequence, I will limit my discussion to six areas: 1) general comments concerning areas of interest in reading research and assessment, 2) the adaptation of a suitable first-language reading comprehension model for second-language assessment, 3) the reliance on a top-down model of reading comprehension, 4) the validity of multiple-choice reading comprehension tests, 5) research on behavioral anchoring, and 6) the testing of reading comprehension in a computer-adaptive testing (CAT) context.
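
To make the CAT context concrete: at its core, a computer-adaptive test runs a loop that selects the unanswered item most informative at the current ability estimate, records the response, and updates the estimate. The following is a minimal, illustrative Python sketch under a simple Rasch model; the item pool, damping step, and update rule are invented for demonstration and do not describe any operational test.

import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def next_item(theta, pool, used):
    """Pick the unused item whose difficulty is most informative."""
    candidates = [i for i in range(len(pool)) if i not in used]
    return max(candidates, key=lambda i: item_information(theta, pool[i]))

def update_theta(theta, b, correct, step=0.5):
    """Damped Newton-style ability update (illustrative only)."""
    p = rasch_p(theta, b)
    info = item_information(theta, b)
    return theta + step * ((1.0 if correct else 0.0) - p) / max(info, 1e-6)

# Toy session: simulate an examinee whose true ability is 1.0.
random.seed(0)
pool = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]  # item difficulties
theta, used = 0.0, set()
for _ in range(5):
    i = next_item(theta, pool, used)
    used.add(i)
    correct = random.random() < rasch_p(1.0, pool[i])  # simulated response
    theta = update_theta(theta, pool[i], correct)
print(f"estimated ability after 5 items: {theta:.2f}")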


2013
Vol 30 (1)
pp. 69
Author(s):  
Daniel Ripley

Although earlier research has examined the potential of portfolios as assessment tools, research on the use of portfolios in the context of second-language education in Canada has been limited. The goal of this study was to explore the benefits and challenges of implementing a portfolio-based language assessment (PBLA) model in Language Instruction for Newcomers to Canada (LINC) programs. Data were gathered through semistructured interviews with four LINC instructors involved in a PBLA pilot project in a large Canadian city. Similar interviews were conducted with a representative of Citizenship and Immigration Canada and a developer of the PBLA model. Participants identified both benefits and challenges related to PBLA implementation. Based on their feedback, recommendations for future implementation are provided.


2005
Vol 25
pp. 228-242
Author(s):  
Joan Jamieson

In the last 20 years, several authors have described the possible changes that computers may effect in language testing. Since ARAL's last review of general language testing trends (Clapham, 2000), authors in the Cambridge Language Assessment Series have offered various visions of how computer technology could alter the testing of second language skills. This chapter reflects these perspectives as it charts the paths recently taken in the field. Initial steps were made in the conversion of existing item types and constructs already known from paper-and-pencil testing into formats suitable for computer delivery. This conversion was closely followed by the introduction of computer-adaptive tests, which aim to make more, and perhaps better, use of computer capabilities to tailor tests more closely to individual abilities and interests. Movement toward greater use of computers in assessment has been coupled with an assumption that computer-based tests should be better than their traditional predecessors, and some related steps have been taken. Corpus linguistics has provided tools to create more authentic assessments; the quest for authenticity has also motivated the inclusion of more complex tasks and constructs. Both these innovations have begun to be incorporated into computer-based language tests. Natural language processing has also provided some tools for computerized scoring of essays, which is particularly relevant in large-scale language testing programs. Although computer use has not revolutionized all aspects of language testing, recent efforts have produced some of the research, technological advances, and improved pedagogical understanding needed to support progress.
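
As a toy illustration of the feature-based approach behind much automated essay scoring (operational systems in large-scale programs are far richer), the Python sketch below extracts a few surface features and combines them with hand-set weights; the feature set, weights, and 0-6 scale are invented for demonstration and do not describe any real scoring engine:

import re

def essay_features(text):
    """Toy surface features of the kind automated scorers draw on."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words)
    return {
        "length": n_words,
        "avg_word_len": sum(map(len, words)) / max(n_words, 1),
        "avg_sent_len": n_words / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(n_words, 1),
    }

# Hypothetical hand-set weights standing in for a trained regression model.
WEIGHTS = {"length": 0.004, "avg_word_len": 0.3,
           "avg_sent_len": 0.05, "type_token_ratio": 1.5}

def score(text):
    """Weighted feature sum, clipped to an illustrative 0-6 scale."""
    feats = essay_features(text)
    raw = sum(WEIGHTS[k] * v for k, v in feats.items())
    return max(0.0, min(6.0, raw))

print(round(score("Language assessment is a broad field. It rewards careful study."), 2))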


2012
Vol 45 (2)
pp. 234-249
Author(s):  
Stephen Stoynoff

In a recent state-of-the-art (SoA) article (Stoynoff 2009), I reviewed some of the trends in language assessment research and considered them in light of validation activities associated with four widely used international measures of L2 English ability. This Thinking Allowed article presents an opportunity to revisit the four broad areas of L2 assessment research (conceptualizations of the L2 construct, validation theory and practice, the application of technology to language assessment, and the consequences of assessment) discussed in the previous SoA and to propose tasks I believe will promote further advances in L2 assessment. Of course, the research tasks I suggest represent a personal stance and readers are encouraged to consider additional perspectives, including those expressed by Bachman (2000), Chalhoub-Deville & Deville (2005), McNamara & Roever (2006), Shaw & Weir (2007), and Stansfield (2008). Moreover, readers will find useful descriptions of current research approaches to investigating L2 assessments in Lumley & Brown (2005), Weir (2005a), Chapelle, Enright & Jamieson (2008), Lazaraton (2008), and Xi (2008).

