Differences Across Levels in the Language of Agency and Ability in Rating Scales for Large-Scale Second Language Writing Assessments

2017 · Vol. 52 (2) · pp. 147-172
Author(s): Salena Sampson Anderson

Abstract: While large-scale language and writing assessments benefit from a wealth of literature on the reliability and validity of specific tests and rating procedures, there is comparatively less literature that explores the specific language of second language writing rubrics. This paper provides an analysis of the language of performance descriptors for the public versions of the TOEFL and IELTS writing assessment rubrics, with a focus on linguistic agency encoded by agentive verbs and language of ability encoded by the modal verbs can and cannot. While the IELTS rubrics feature more agentive verbs than the TOEFL rubrics, both pairs of rubrics feature uneven syntax across the band or score descriptors, with more agentive verbs for the highest scores, more nominalization for the lowest scores, or language of ability exclusively in the lowest scores. These patterns mirror similar patterns in the language of college-level classroom-based writing rubrics, but they differ from patterns seen in performance descriptors for some large-scale admissions tests. It is argued that the lack of syntactic congruity across performance descriptors in the IELTS and TOEFL rubrics may reflect a bias in how actual student performances at different levels are characterized.
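The kind of descriptor analysis described above can be made concrete with a short sketch. The band descriptors and the verb list below are invented for illustration (the study's actual TOEFL and IELTS wording and coding scheme are not reproduced here); the sketch simply tallies agentive verbs and the ability modals can/cannot per band.

```python
# A minimal sketch, not the study's instrument: tally agentive verbs and the
# ability modals "can"/"cannot" across hypothetical band descriptors.
import re

# Invented descriptors; the real TOEFL/IELTS wording differs.
descriptors = {
    5: "The writer organizes ideas effectively and develops the topic fully.",
    3: "Organization of ideas is uneven; development of the topic is limited.",
    1: "The writer cannot organize ideas and cannot develop the topic.",
}

# Illustrative (assumed) set of agentive verbs; a real analysis would derive
# its coding scheme from the rubric text itself.
AGENTIVE_VERBS = {"organizes", "develops", "uses", "addresses", "selects"}

for band, text in descriptors.items():
    tokens = re.findall(r"[a-z]+", text.lower())
    agentive = sum(t in AGENTIVE_VERBS for t in tokens)
    ability = sum(t in {"can", "cannot"} for t in tokens)
    print(f"band {band}: agentive verbs = {agentive}, can/cannot = {ability}")
```

Note how the toy descriptors reproduce the pattern the paper reports: agentive verbs at the top band, nominalizations ("organization of ideas") in the middle, and language of ability only at the bottom.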

2021
Author(s): Anna Siyanova, S. Spina

© 2019 Language Learning Research Club, University of Michigan
In the present study, we sought to advance the field of learner corpus research by tracking the development of phrasal vocabulary in essays produced at two different points in time. To this aim, we drew on a large pool of second language (L2) learners (N = 175) from three proficiency levels (beginner, elementary, and intermediate) and focused on an underrepresented L2 (Italian). Employing mixed-effects models, a flexible and powerful tool for corpus data analysis, we analyzed learners' word combinations in terms of five different measures: phrase frequency, mutual information, lexical gravity, delta P forward, and delta P backward. Our findings suggest a complex picture, in which higher proficiency and greater exposure to the L2 do not result in more idiomatic and targetlike output and may, in fact, result in greater reliance on low-frequency combinations whose constituent words are non-associated or mutually attracted.
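Two of the five association measures named above, mutual information and delta P, have standard formulations that a short sketch can illustrate. The corpus counts below are invented and lexical gravity is omitted; this is a minimal illustration, not the study's analysis pipeline.

```python
# A minimal sketch of two association measures from toy bigram counts.
import math

N = 1_000_000            # total bigram tokens in the (hypothetical) corpus
f_xy = 150               # frequency of the bigram, e.g. an adjective-noun pair
f_x, f_y = 2_000, 5_000  # frequencies of the first and second word

# Pointwise mutual information: log2 of observed over expected co-occurrence.
mi = math.log2((f_xy * N) / (f_x * f_y))

# Delta P forward: P(word2 | word1) minus P(word2 | not word1).
dp_fwd = f_xy / f_x - (f_y - f_xy) / (N - f_x)
# Delta P backward: P(word1 | word2) minus P(word1 | not word2).
dp_bwd = f_xy / f_y - (f_x - f_xy) / (N - f_y)

print(f"MI = {mi:.2f}, dP_forward = {dp_fwd:.4f}, dP_backward = {dp_bwd:.4f}")
```

Delta P is directional, which is why it is computed twice: the forward value asks how strongly the first word predicts the second, and the backward value asks the reverse.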


2019 · Vol. 35 (3)
Author(s): Duong Thu Mai

As language assessment in Vietnam receives intensive attention from the Ministry of Education and Training and undergoes critical transformation, criterion-referenced assessment has gradually become a familiar term for language teachers, assessors, and administrators. Although the name of the approach has been used extensively, most teachers of English at all levels of language education still face the challenge of identifying "criteria" for writing assessment scales. This paper provides a reference for teachers and researchers in second language writing concerning major developments in the field in defining the construct of "writing competence". The paper focuses on the existing published literature, globally, on English writing teaching approaches, research, and practices. These contents are reviewed and summarized into two major strands: product-oriented considerations and process-oriented considerations.


2017 · Vol. 10 (6) · p. 174
Author(s): Rana Obeid

This small-scale, quantitative research study explored one of the most debated areas in the field of Teaching English to Speakers of Other Languages (TESOL): the perceptions and attitudes towards second language writing assessment held by English as a Foreign Language (EFL) teachers and learners at an English Language Institute (ELI) at a major university in the Western region of Saudi Arabia, King Abdulaziz University. The study involved twenty-two randomly selected EFL teachers and seventy-eight EFL students between September 2016 and December 2016. Two purposefully designed, twenty-item Likert-scale questionnaires were distributed, one to the participating EFL teachers and one to the participating EFL students. Data analysis using descriptive statistical methods indicated several concerns that EFL teachers and students have with regard to writing assessment in general and to the obstacles EFL teachers face when teaching and assessing writing. In addition, there was an indication of general resentment and strong feelings amongst the EFL students, the majority of whom indicated that they are sometimes graded unfairly and that writing assessment should take a more holistic approach rather than a narrow one. The study makes recommendations for future research.
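The descriptive analysis mentioned above amounts to summarizing each questionnaire item's response distribution. A minimal sketch, with invented responses to three hypothetical 5-point Likert items:

```python
# A minimal sketch (invented data, not the study's): per-item mean and
# standard deviation for a 5-point Likert questionnaire.
import statistics

# Each row is one respondent's answers to three hypothetical items (1-5).
answers = [
    [4, 2, 5],
    [5, 1, 4],
    [3, 2, 4],
    [4, 3, 5],
]

for j in range(len(answers[0])):
    item = [row[j] for row in answers]
    print(f"item {j + 1}: mean = {statistics.mean(item):.2f}, "
          f"sd = {statistics.stdev(item):.2f}")
```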


2021 · Vol. 53 (2) · pp. 11-25
Author(s): Sheri Dion

This paper presents a methodological critique of three empirical studies in second language (L2) French writing assessment. To distinguish key themes in French L2 writing assessment, a literature review was conducted, resulting in the identification of 27 studies that were categorized into three major themes. The three studies examined in this article each represent one of these themes. Within this analysis, the underlying constructs being measured are identified, and the strengths and limitations of each study are discussed. Findings from this detailed examination suggest that the three examined studies in L2 French writing assessment have significant methodological flaws that raise questions about the claims being made. From this investigation, several study-specific recommendations are made, and four general recommendations for improving French L2 writing assessment are offered: (1) the social setting in which L2 assessments take place ought to be a consideration; (2) the difficulty of tasks and time on task should be taken into account; (3) greater consistency should be used when measuring and denoting a specific level of instruction (i.e., "advanced"); and (4) universal allusions to "fluency" should be avoided when generalizing one component of L2 competency (such as writing achievement) to other aspects of L2 development.

Key words: French writing, methodological critique, written assessment, language assessment, second language writing assessment


2010 · Vol. 15 (4) · pp. 474-496
Author(s): Xiaofei Lu

We describe a computational system for automatic analysis of syntactic complexity in second language writing using fourteen different measures that have been explored or proposed in studies of second language development. The system takes a written language sample as input and produces fourteen indices of syntactic complexity of the sample based on these measures. The system is designed with advanced second language proficiency research in mind, and is therefore developed and evaluated using college-level second language writing data from the Written English Corpus of Chinese Learners (Wen et al., 2005). Experimental results show that the system achieves very high reliability on unseen test data from the corpus. We illustrate how the system is used in an example application to investigate whether, and to what extent, each of these measures significantly differentiates between different proficiency levels.
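As a rough illustration of what one such index involves, the sketch below computes mean length of sentence (MLS) with naive segmentation. This is only an assumed, parser-free stand-in: the actual system derives its fourteen indices from full syntactic parses.

```python
# A minimal sketch of one syntactic complexity index (mean length of
# sentence); real systems use a syntactic parser rather than regexes.
import re

def mean_length_of_sentence(text: str) -> float:
    """Words per sentence, using naive sentence and word segmentation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences) if sentences else 0.0

sample = "The system parses essays. It then computes indices of complexity."
print(f"MLS = {mean_length_of_sentence(sample):.2f}")
```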


2020 · pp. 026553222091670
Author(s): Zoltán Lukácsi

In second language writing assessment, rating scales and scores from human-mediated assessment have been criticized for a number of shortcomings, including problems with adequacy, relevance, and reliability (Hamp-Lyons, 1990; McNamara, 1996; Weigle, 2002). In its testing practice, Euroexam International also detected that the rating scales for writing at B2 had limited discriminating power and did not adequately reflect finer shades of candidate ability. This study sought to investigate whether a level-specific checklist of binary-choice items could be designed to yield results that accurately reflect differential degrees of ability in EFL essay writing at level B2. The participants were four language teachers working as independent raters. The study involved the task materials, operational rating scales, reported scores, and candidate scripts from the May 2017 test administration. In a mixed-methods strategy of inquiry, qualitative data from stimulated recall, think-aloud protocols, and semi-structured interviews informed statistical test and item analyses. The results indicated that the checklist items were more transparent, led to increased variance, and contributed to a more coherent candidate language profile than scores from the rating scales. The implications support the recommendation that checklists be used for level-specific language proficiency testing (Council of Europe, 2001, p. 189).
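A standard way to check whether binary checklist items reflect differential degrees of ability is a classic item analysis. The sketch below, using invented scripts rather than Euroexam data, computes each item's facility (proportion of candidates credited) and its point-biserial discrimination against the total score.

```python
# A minimal sketch (invented data) of item analysis for a binary checklist.
import statistics

# Rows = candidate scripts, columns = binary checklist items (1 = credited).
responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
]

totals = [sum(row) for row in responses]

for j in range(len(responses[0])):
    item = [row[j] for row in responses]
    # Point-biserial = Pearson correlation between a binary item and totals.
    r = statistics.correlation(item, totals)
    print(f"item {j + 1}: facility = {sum(item) / len(item):.2f}, "
          f"discrimination = {r:.2f}")
```

A fuller analysis would correlate each item with the rest-score (total minus the item) to avoid inflating discrimination, but the shape of the computation is the same.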

