Putting Students at the Centre of Classroom L2 Writing Assessment

Author(s):  
Icy Lee

2008 ◽  
Vol 13 (3) ◽  
pp. 153-170 ◽  
Author(s):  
Cecilia Guanfang Zhao ◽  
Lorena Llosa


2007 ◽  
Vol 24 (1) ◽  
pp. 37-64 ◽  
Author(s):  
Catherine Elder ◽  
Gary Barkhuizen ◽  
Ute Knoch ◽  
Janet von Randow




2021 ◽  
Vol 53 (2) ◽  
pp. 11-25 ◽
Author(s):  
Sheri Dion

This paper presents a methodological critique of three empirical studies in second language (L2) French writing assessment. To distinguish key themes in French L2 writing assessment, a literature review was conducted, resulting in the identification of 27 studies that were categorized into three major themes. The three studies examined in this article each represent one of these themes. Within this analysis, the underlying constructs being measured are identified, and the strengths and limitations of each study are considered. Findings from this detailed examination suggest that the three examined studies in L2 French writing assessment have significant methodological flaws that raise questions about the claims being made. From this investigation, several study-specific recommendations are made, and four general recommendations for improving French L2 writing assessment are offered: (1) the social setting in which L2 assessments take place ought to be a consideration; (2) the difficulty of tasks and time on task should be taken into account; (3) greater consistency should be used when measuring and denoting a specific level of instruction (e.g., “advanced”); and (4) universal allusions to “fluency” should be avoided when generalizing one component of L2 competency (such as writing achievement) to other aspects of L2 development. Key words: French writing, methodological critique, written assessment, language assessment, second language writing assessment



2017 ◽  
Vol 34 (4) ◽  
pp. 493-511 ◽  
Author(s):  
Xiaofei Lu

Research investigating corpora of English learners’ language raises new questions about how syntactic complexity is defined theoretically and operationally for second language (L2) writing assessment. I show that syntactic complexity is important in construct definitions and L2 writing rating scales as well as in L2 writing research. I describe the operationalizations of syntactic complexity measurement in corpus-based L2 writing research, focusing on the Biber Tagger (Biber, Johansson, Leech, Conrad, & Finegan, 1999), Coh-Metrix (McNamara, Graesser, McCarthy, & Cai, 2014), and L2 Syntactic Complexity Analyzer (Lu, 2010), which are three tools commonly used to automate syntactic complexity analysis. A review of findings from recent corpus-based L2 writing studies on the relationship of syntactic complexity to L2 writing quality follows. I conclude with a discussion of the implications of these multiple perspectives on the definition of syntactic complexity in L2 studies.
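The kind of operationalization the abstract describes can be illustrated with a toy sketch. The two indices below (mean sentence length and a rough subordination ratio) are hypothetical simplifications for illustration only; they are not the measures computed by the Biber Tagger, Coh-Metrix, or the L2 Syntactic Complexity Analyzer, which rely on full syntactic parsing.

```python
import re

# Toy illustration only: two coarse syntactic complexity indices.
# Real tools parse full syntax trees; this sketch splits on terminal
# punctuation and keyword-matches a small, hypothetical subordinator list.

SUBORDINATORS = {"because", "although", "that", "which", "when", "while", "if"}

def split_sentences(text):
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def mean_sentence_length(text):
    """Average number of whitespace-separated words per sentence."""
    sentences = split_sentences(text)
    return sum(len(s.split()) for s in sentences) / len(sentences)

def subordination_ratio(text):
    """Share of sentences containing at least one listed subordinator."""
    sentences = split_sentences(text)
    flagged = sum(
        1 for s in sentences
        if any(w.lower().strip(",;") in SUBORDINATORS for w in s.split())
    )
    return flagged / len(sentences)

sample = "I ran. The dog that barked slept."
print(mean_sentence_length(sample))  # 3.5
print(subordination_ratio(sample))   # 0.5
```

Even this crude pair of indices shows why operationalization matters: texts with identical mean sentence length can differ sharply in subordination, which is one reason the tools reviewed in the article report many distinct complexity measures rather than a single score.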



2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Hyunwoo Kim

The halo effect is raters’ undesirable tendency to assign more similar ratings across rating criteria than they should. The impact of the halo effect on ratings has been studied in rater-mediated L2 writing assessment. Little is known, however, about the extent to which the order of rating criteria in analytic rating scales is associated with the magnitude of group- and individual-level halo effects. This study therefore examines that relationship. To select essays untainted by the effects of rating criteria order, a balanced Latin square design was implemented with four expert raters. Next, 11 trained novice Korean raters rated the 30 screened essays on the four rating criteria in three different orders: standard, reverse, and random. A three-facet rating scale model (L2 writer ability, rater severity, criterion difficulty) was fitted to estimate the group- and individual-level halo effects. The overall results showed that a similar magnitude of the group-level halo effect was detected in the standard- and reverse-order rating rubrics, while the random presentation of rating criteria decreased the group-level halo effect. A theoretical implication of the study is the necessity of considering rating criteria order as a source of construct-irrelevant easiness or difficulty when developing analytic rating scales.
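The balanced Latin square design mentioned in the abstract can be sketched as follows. This is the standard even-order construction (first row 0, 1, n-1, 2, n-2, …, each later row shifted by one), shown for four criteria; the criterion labels are assumed for illustration and are not the study’s actual rubric.

```python
def balanced_latin_square(n):
    """Construct a balanced Latin square of even order n.

    Every condition appears exactly once in each row and each column,
    and each ordered pair of adjacent conditions occurs exactly once,
    so immediate carry-over (order) effects are counterbalanced.
    """
    if n % 2 != 0:
        raise ValueError("this construction requires an even order")
    # Canonical first row: 0, 1, n-1, 2, n-2, ...
    first, low, high = [0], 1, n - 1
    while len(first) < n:
        first.append(low)
        low += 1
        if len(first) < n:
            first.append(high)
            high -= 1
    # Each subsequent row shifts every entry by 1 (mod n).
    return [[(x + i) % n for x in first] for i in range(n)]

# Hypothetical criterion labels, for illustration only.
criteria = ["content", "organization", "language use", "mechanics"]
for row in balanced_latin_square(4):
    print([criteria[i] for i in row])
```

With four criteria this yields four presentation sequences in which every criterion occupies every serial position once and follows every other criterion exactly once, which is what lets the screening phase separate rating-order effects from essay quality.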


