Measuring the impact of rater negotiation in writing performance assessment

2016 ◽  
Vol 34 (1) ◽  
pp. 3-22 ◽  
Author(s):  
Jonathan Trace ◽  
Gerriet Janssen ◽  
Valerie Meier

Previous research in second language writing has shown that, when scoring performance assessments, even trained raters can exhibit significant differences in severity. When raters disagree, using discussion to try to reach a consensus is one popular form of score resolution, particularly in contexts with limited resources, as it does not require adjudication by a third rater. However, from an assessment validation standpoint, questions remain about the impact of negotiation on the scoring inference of a validation argument (Kane, 2006, 2012). Thus, this mixed-methods study evaluates the impact of score negotiation on scoring consistency in second language writing assessment, as well as negotiation's potential contributions to raters' understanding of test constructs and the local curriculum. Many-faceted Rasch measurement (MFRM) was used to analyze scores (n = 524) from the writing section of an EAP placement exam and to quantify how negotiation affected rater severity, self-consistency, and bias toward individual categories and test takers. Semi-structured interviews with raters (n = 3) documented their perspectives on how negotiation affects scoring and teaching. In this study, negotiation did not change rater severity, though it greatly reduced measures of rater bias. Furthermore, rater comments indicated that negotiation supports a nuanced understanding of the rubric categories and increases positive washback on teaching practices.
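For reference, the many-facet Rasch model underlying analyses like the one above is commonly written in a rating-scale formulation such as the following (a generic sketch of the standard model, not an equation taken from the study itself):

```latex
\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k
```

Here \(P_{nijk}\) is the probability that examinee \(n\) receives category \(k\) (rather than \(k-1\)) on criterion \(i\) from rater \(j\); \(\theta_n\) is examinee ability, \(\delta_i\) criterion difficulty, \(\alpha_j\) rater severity, and \(\tau_k\) the category threshold. Modeling rater severity \(\alpha_j\) as a separate facet is what allows studies of this kind to isolate the effect of negotiation on raters.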

2021 ◽  
Vol X (3) ◽  
pp. 66-73
Author(s):  
Liliya Makovskaya

Feedback has always been considered important in second language writing. Recently, for various reasons, electronic feedback has become one of the most frequently applied types (Zareekbatani, 2015; Ene & Upton, 2018). The aim of this research study was therefore to identify lecturers' and students' views on the use of online comments provided on second language writing tasks. The data were collected through online semi-structured interviews with undergraduate students and lecturers at one Uzbek university. The findings revealed that a variety of comments given on different aspects of the written assessment tasks in Google Docs, combined with additional oral feedback, were effective. The article discusses the detailed findings of the research study and provides possible suggestions for language teachers on the use of electronic feedback in L2 writing.


2019 ◽  
Vol 35 (3) ◽  
Author(s):  
Duong Thu Mai

As language assessment in Vietnam receives intensive attention from the Ministry of Education and Training and undergoes critical transformation, criterion-referenced assessment has gradually become a familiar term for language teachers, assessors, and administrators. Although the name of the approach has been extensively used, most teachers of English at all levels of language education still face the challenge of identifying "criteria" for writing assessment scales. This paper attempts to provide a reference for teachers and researchers in second language writing concerning major developments in the field in defining the construct of "writing competence". The paper focuses on the existing published literature, globally, on English writing teaching approaches, research, and practices. These contents are reviewed and summarized into two major strands: product-oriented considerations and process-oriented considerations.


2019 ◽  
Author(s):  
Sundus Ziad AlKadi ◽  
Abeer Ahmed Madini

With new technology, writing has become a skill that continues to develop year after year. The present study questions whether there is a difference between paper-based and computer-based writing in terms of errors and lexico-grammar. It aims at exploring sentence-level errors and lexico-grammatical competence in two writing genres in a collaborative writing environment, comparing paper-based and computer-based writing. A sample of 73 female intermediate-level learners at the University of Business and Technology (UBT) in Saudi Arabia participated in the study. This mixed-methods research is significant in the literature of second language writing since it highlights genre awareness, lexico-grammatical competence, error analysis, and collaboration in two modes of writing. The reading-based writing tasks acted as a reflection of the learners' lexico-grammatical competence on paper and via a Web 2.0 tool (Padlet). Statistically, Mann-Whitney U-tests showed no significant difference between the paper-based and computer-based groups in sentence-level errors in the narrative genre, whereas there was a significant difference between the two writing groups in sentence-level errors in the opinion genre. However, there was no significant difference between the paper-based and computer-based groups in the clauses (lexico-grammar) of the two groups. Immediate semi-structured interviews were conducted and analyzed through NVivo to gain further insights from the learners and to explain the comparison between paper-based and computer-based writing. In light of the significant findings, implications are offered for creating an equilibrium between paper-based and computer-based writing, along with enhancing collaboration in second language writing.
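To illustrate the kind of between-group comparison reported above, here is a minimal sketch of a Mann-Whitney U test on per-essay error counts. The numbers are invented for illustration and are not the study's data; the sketch assumes SciPy is available.

```python
# Hypothetical comparison of sentence-level error counts between a
# paper-based and a computer-based writing group (invented data).
from scipy.stats import mannwhitneyu

paper_errors = [4, 7, 5, 9, 6, 8, 3, 7]      # errors per essay, paper-based group
computer_errors = [5, 6, 4, 8, 7, 5, 6, 9]   # errors per essay, computer-based group

# Two-sided test: is the distribution of errors different between groups?
u_stat, p_value = mannwhitneyu(paper_errors, computer_errors,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```

A p-value above the conventional .05 threshold would mirror the study's null result for the narrative genre; a rank-based test like this is a common choice when error counts cannot be assumed to be normally distributed.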


Author(s):  
Amir Rezaei ◽  
Khaled Barkaoui

Abstract This study aimed to compare second-language (L2) students' ratings of their peers' essays on multiple criteria with those of their teachers under different assessment conditions. Forty EFL teachers and 40 EFL students took part in the study. They each rated one essay on five criteria twice, under high-stakes and low-stakes assessment conditions. Multifaceted Rasch analysis and correlation analyses were conducted to compare rater severity and consistency across rater groups, rating criteria, and assessment conditions. The results revealed that there was more variation in students' ratings than in teachers' ratings across assessment conditions. Additionally, the two rater groups showed different degrees of severity in assessing different criteria. In general, students were significantly more severe on language use than were teachers, whereas teachers were significantly more severe than were peers on organization. Student and teacher severity also varied across rating criteria and assessment conditions. The findings of this study have implications for planning and implementing peer assessment in the L2 writing classroom as well as for future research.


2021 ◽  
Vol 44 (2) ◽  
pp. 131-165
Author(s):  
Rod Ellis

Abstract There are both pedagogical and theoretical grounds for asking second language writers to plan before they start writing. The question then arises whether pre-task planning (PTP) improves written output. To address this question, this article reviewed 32 studies that compared the effect of PTP either with no planning or with unpressured online planning (OLP). These studies also investigated the moderating effect of variables relating to the writer participants, the nature of the planning, and the writing tasks. The main findings are: (1) There is no clear evidence that PTP leads to better overall writing quality when this is measured using rating rubrics, (2) PTP generally results in more fluent writing, (3) its impact on syntactical and lexical complexity is inconsistent and negligible, (4) OLP does sometimes result in increased linguistic accuracy, and (5) there is insufficient evidence to reach clear conclusions about the role that moderating variables have on the impact of PTP, but the results suggest that collaborative (as opposed to individual) planning can lead to increased accuracy and that PTP tends to lead to more complex language when the writing task is a complex one. The article concludes with a set of principles to ensure better quality research and three general proposals for the kind of future research needed.


2017 ◽  
Vol 10 (6) ◽  
pp. 174 ◽  
Author(s):  
Rana Obeid

This small-scale, quantitative research study explored one of the most debated areas in the field of Teaching English to Speakers of Other Languages (TESOL): the perceptions and attitudes toward second language writing assessment held by English as a Foreign Language (EFL) teachers and EFL learners at an English Language Institute (ELI) at a major university in the Western region of Saudi Arabia, King Abdulaziz University. The study involved twenty-two randomly selected EFL teachers and seventy-eight EFL students between September 2016 and December 2016. Two purpose-designed, twenty-item Likert-scale questionnaires were distributed: one to the participating EFL teachers and one to the participating EFL students. Data analysis using descriptive statistical methods indicated several concerns that EFL teachers and students have with regard to writing assessment in general and to the obstacles EFL teachers face when teaching and assessing writing. In addition, there was an indication of general resentment and strong feelings among the EFL students, the majority of whom indicated that they are sometimes graded unfairly and that writing assessment should take a more holistic approach rather than a narrow one. The study makes recommendations for future research.


2021 ◽  
Vol 53 (2) ◽  
pp. 11-25
Author(s):  
Sheri Dion

This paper presents a methodological critique of three empirical studies in second language (L2) French writing assessment. To distinguish key themes in French L2 writing assessment, a literature review was conducted, resulting in the identification of 27 studies that were categorized into three major themes. The three studies examined in this article each represent one theme respectively. Within this analysis, the underlying constructs being measured are identified, and the strengths and limitations are discussed. Findings from this detailed examination suggest that the three examined studies in L2 French writing assessment have significant methodological flaws that raise questions about the claims being made. From this investigation, several study-specific recommendations are made, and four general recommendations for improving French L2 writing assessment are offered: (1) the social setting in which L2 assessments take place ought to be a consideration; (2) the difficulty of tasks and time on task should be taken into account; (3) greater consistency should be used when measuring and denoting a specific level of instruction (i.e., "advanced"); and (4) universal allusions to "fluency" should be avoided when generalizing one component of L2 competency (such as writing achievement) to other aspects of L2 development.

Key words: French writing, methodological critique, written assessment, language assessment, second language writing assessment

