Assessing Second Language Writing
Abstract

This article examines the relationship between two kinds of methods used to assess the quality of second language writing: 1) objective computerized text analysis focusing on the linguistic features of written texts, and 2) subjective evaluation performed by human raters using a combination of holistic and analytical scoring procedures. In particular, it explores the potential and possible limitations of using computerized programs as research tools in second language writing research. The written sample consisted of 132 short essays written by ESL students enrolled in various academic programs at an American university. The first method used computerized programs to assess the written texts in terms of syntactic complexity, lexical complexity and grammatical accuracy, whereas in the second method two ESL raters evaluated the same sample of texts by first assigning a holistic score to each piece of writing and then applying an analytical scheme to assess linguistic features at the syntactic, lexical and grammatical levels as well as textual and rhetorical features at the discourse level. A series of correlation analyses was performed using the scores obtained from these two kinds of assessment procedures at the corresponding levels. The results show that a significant correlation was consistently found between the two kinds of scores at the level of grammatical accuracy, yet no significant correlation was found in any of the other categories. The results also indicate a high level of internal consistency in the computerized analysis.
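The per-category correlation analysis described above can be illustrated with a minimal sketch. All score values, category names, and the decision to average the two raters' scores are hypothetical placeholders for illustration, not the study's data or exact procedure; the example simply computes a Pearson correlation per category between the computerized scores and the averaged rater scores.

```python
# Minimal sketch of a per-category correlation analysis.
# All score values below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for a handful of essays, one array per category.
# In the study, each of the 132 essays would have a computerized score and
# two rater scores per category; the rater scores are assumed averaged here.
computerized = {
    "syntactic_complexity": np.array([3.1, 2.4, 4.0, 3.3, 2.8]),
    "lexical_complexity":   np.array([0.52, 0.47, 0.61, 0.55, 0.49]),
    "grammatical_accuracy": np.array([0.90, 0.72, 0.95, 0.81, 0.77]),
}
rater_mean = {
    "syntactic_complexity": np.array([4.0, 3.5, 4.5, 4.0, 3.0]),
    "lexical_complexity":   np.array([3.5, 3.0, 4.5, 4.0, 3.5]),
    "grammatical_accuracy": np.array([4.5, 3.0, 5.0, 3.5, 3.0]),
}

# Pearson r and its p-value for each category.
for category in computerized:
    r, p = pearsonr(computerized[category], rater_mean[category])
    print(f"{category}: r = {r:.2f}, p = {p:.3f}")
```

A statistically significant r for a category would correspond to the kind of agreement the study reports for grammatical accuracy; a non-significant r would correspond to the other categories.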