Writing Assessment
Recently Published Documents

TOTAL DOCUMENTS: 601 (five years: 167)
H-INDEX: 25 (five years: 2)

2022 · Vol 12 (1) · pp. 55-64
Author(s): Shifa Alotibi, Abdullah Alshakhi

This study explores the factors that influence EFL instructors’ rating decisions when using holistic and analytic rubrics. Few studies have examined the factors that shape the rating practices of EFL instructors, particularly in the Saudi EFL context. This study addresses that gap and contributes more broadly to understanding the interplay between EFL instructors and the use of holistic and analytic rubrics. The data were collected in a preparatory year program (PYP) at a Saudi university through semi-structured interviews with eleven EFL instructors of different nationalities. Guided by critical language testing as a theoretical framework and using qualitative analysis, the study reveals that critical language testing can minimize the negative consequences of writing assessment done by graders; however, students’ low English proficiency, time constraints, and heavy workload can negatively affect rating practices. Finally, several pedagogical implications, insights, and recommendations for future research are put forward in the conclusion.
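For readers outside language assessment, the distinction the study turns on can be made concrete in code. The sketch below is a hypothetical illustration, not anything from the study: the criterion names, weights, and band range are invented. It shows the structural difference between the two rubric types: an analytic rubric scores each criterion separately and combines the scores by weight, while a holistic rubric records a single overall band in one judgement.

```python
# Hypothetical sketch of the two rubric types discussed above.
from dataclasses import dataclass

@dataclass
class AnalyticRubric:
    # Each criterion is scored separately, then combined by weight.
    weights: dict[str, float]

    def score(self, criterion_scores: dict[str, float]) -> float:
        assert set(criterion_scores) == set(self.weights)
        return sum(self.weights[c] * criterion_scores[c] for c in self.weights)

def holistic_score(band: int) -> int:
    # A holistic rubric assigns one overall band (here 1-6) in a single judgement.
    assert 1 <= band <= 6
    return band

# Invented criteria and weights, purely for illustration.
rubric = AnalyticRubric(weights={"content": 0.4, "organization": 0.3,
                                 "language": 0.2, "mechanics": 0.1})
print(rubric.score({"content": 4, "organization": 3,
                    "language": 4, "mechanics": 5}))  # weighted total, about 3.8
print(holistic_score(4))
```

Either structure still depends on human judgements, which is where the factors the study identifies (proficiency, time constraints, workload) come into play.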


Author(s): Fitri Handayani, Hermawati Syarif

Assessment is at the core of the teaching process: it shapes students’ understanding of the curriculum and determines their ability to progress, and choosing an assessment strategy is an important part of curriculum planning. Hence, in the shift from face-to-face learning to full-time online learning, several challenges arose, including how to deliver online writing assessment to students. In particular, online assessment of students’ writing has become an unprecedented situation for many English lecturers. The transition from face-to-face assessment to online assessment has been a new experience for lecturers who have never applied it before and have no background knowledge of its mechanisms and methods. The issue raises an important point for English teaching practitioners and course designers regarding the strategies and challenges of this mandatory mode of assessment. From this perspective, the purpose of this paper is to provide an overview of online writing assessment in the COVID-19 pandemic era, including the challenges lecturers face in conducting online assessments, as well as a set of recommendations for designing online assessment mechanisms and strategies that result in a fair assessment process for all.


2021 · Vol 15 (1) · pp. 16
Author(s): Xuefeng Wu

Rating scales for writing assessment are critical in that they directly determine the quality and fairness of such performance tests. However, in many EFL contexts, rating scales are constructed, to a certain extent, on the basis of teachers’ intuition, and teachers need a feasible and scientific route to guide scale construction. This study designs an operational model of rating scale construction, using English summary writing as an example. Altogether, 325 university English teachers, 4 experts in language assessment, and 60 English majors in China participated in the study. Twenty textual attributes were extracted, through text analysis, from China’s Standards of English Language Ability (CSE), the theoretical construct of summary writing, comments on sample summary essays from 8 English teachers, and the teachers’ personal judgement. The textual attributes were then investigated through a large-scale questionnaire survey. Exploratory factor analysis and expert judgement were employed to determine the rating scale dimensions, and regression analysis and expert judgement were used to determine the weighting distribution across those dimensions. On this basis, a tentative operational model of rating scale construction was established, which can also be applied and adapted to develop rating scales for other writing assessments.
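The weighting step described above (regression analysis to determine the weight distribution across dimensions) can be sketched in a few lines. This is a minimal illustration under assumed conditions: the dimension names are placeholders, the data are synthetic, and plain ordinary least squares stands in for whatever regression procedure the study actually paired with expert judgement.

```python
# Sketch: derive dimension weights by regressing an overall quality score on
# per-dimension ratings, then normalizing the coefficients to sum to 1.
# All data below are synthetic; dimension names are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_essays = 200
dims = ["content coverage", "organization", "language accuracy"]

# Synthetic per-dimension ratings on a 1-5 scale, plus an overall score that
# depends on them through unknown "true" weights and rater noise.
X = rng.integers(1, 6, size=(n_essays, len(dims))).astype(float)
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.3, n_essays)

# Ordinary least squares (no intercept, for brevity).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Clip any negative coefficients and normalize so the weights sum to 1.
weights = np.clip(beta, 0, None)
weights /= weights.sum()
for d, w in zip(dims, weights):
    print(f"{d}: {w:.2f}")
```

In the study itself the regression output is moderated by expert judgement rather than taken as-is; the sketch only shows why regression yields a natural weighting: the coefficients measure how strongly each dimension drives the overall score.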


2021 · Vol 2 (1) · pp. 22-31
Author(s): Jimalee Sowell

While the field of composition might like to believe it has moved on from concern about the five-paragraph essay, the debate is far from over. The five-paragraph essay continues to be taught, and oppositionists continue to rail against it. As long as the five-paragraph essay remains a common form of writing assessment on standardized exams and a form commonly taught in schools, it will likely persist. Instead of calling for its retirement, practitioners and researchers need to rethink the potential of the five-paragraph essay as a foundational form and reconsider approaches to teaching it. Some of the problems associated with the five-paragraph essay are likely due to pedagogical decisions, such as an exclusive focus on the form and a failure to advance to other forms when students are ready, rather than to the five-paragraph form itself. In this paper, I define the five-paragraph essay, outline some of its historical roots, challenge common criticisms of it, and suggest that it might be a useful foundational form.


2021 · Vol 11 (12) · pp. 795
Author(s): Michael Dunn

Writing is a necessary skill in our technological world. Many people have a mobile device that they use for e-mailing, social media, starting the day with an alarm, reading the news, searching for information, ordering food, managing transportation (e.g., monitoring traffic, accessing public transit), or relaxing pursuits such as watching a movie or listening to music. While these tasks are natural and almost effortless for numerous people, many students struggle with composing longer prose, especially for academic tasks. The 2021 U.S. National Assessment of Educational Progress for Writing, for example, indicates that as many as 75% of students cannot write at a basic level. In this article, the author discusses recent examples from the professional literature about why writing can be a challenge for students, what is involved in writing assessment, how we can help students improve their writing skills, and how we can promote technology as part of the instruction and learning processes.


2021 · Vol 2 (1) · pp. 32-39
Author(s): Sumara Suzzette Prince

Traditional methods of assessing university students' speaking and writing abilities, especially in creative design fields, can be perceived as both impractical and monotonous. This study examines college students' perceptions of how effective the tools currently used to assess them are, whether through authentic assessment or standardized testing, and whether anxiety plays any role in their performance. Twenty-one graphic design students taking an advanced English for Specific Purposes (ESP) course at a private university in Madrid completed the survey. The survey, mostly qualitative, asked students to evaluate how effective the different forms of authentic assessment were, in both speaking and writing, compared with the standardized tests on which they were mainly evaluated. The results show that students generally deemed the various forms of authentic assessment more effective, albeit not significantly so. Similarly, there was no clear difference between the anxiety levels produced by authentic assessment and those produced by standardized and classical formative assessment. Not surprisingly, however, most students preferred the use of social media platforms, such as Instagram, as a form of writing assessment, even though they did not consider it valid. This paper will hopefully encourage syllabus designers and material developers to consider students' perceptions and preferences in the assessment process, while keeping in mind what their chosen fields will expect of them as professionals, since current trends and attitudes toward assessment should be more in line with the industry.


2021 · Vol 11 (1)
Author(s): Ali Khodi

The present study investigated factors that affect EFL writing scores using generalizability theory (G-theory). To this end, one hundred and twenty students completed one independent and one integrated writing task. Their performances were then scored by six raters: one self-rating, three peer ratings, and two instructor ratings. The main purpose of the study was to determine the relative and absolute contributions of different facets, such as student, rater, task, method of scoring, and background of education, to the validity of writing assessment scores. The results indicated three major sources of variance: (a) the student by task by method of scoring (nested in background of education) interaction (STM:B), contributing 31.8% of the total variance; (b) the student by rater by task by method of scoring (nested in background of education) interaction (SRTM:B), contributing 26.5%; and (c) the student by rater by method of scoring (nested in background of education) interaction (SRM:B), contributing 17.6%. With regard to the G-coefficients in the G-study (relative G-coefficient ≥ 0.86), the assessment results were found to be highly valid and reliable. The sources of error variance were identified as the student by rater (nested in background of education) interaction (SR:B) and the rater by background of education interaction, contributing 99.2% and 0.8% of the error variance, respectively. Additionally, ten separate G-studies were conducted to investigate the contribution of different facets across rater, task, and method of scoring as the differentiation facet. These studies suggested that peer rating, the analytic scoring method, and integrated writing tasks were the most reliable and generalizable writing assessment designs. Finally, five decision-making studies (D-studies) at the optimization level indicated that at least four raters are necessary for a valid and reliable assessment (G-coefficient = 0.80). Based on these results, to achieve the greatest gain in generalizability, teachers should have their students take two writing assessments, rated using at least two scoring methods by at least four raters.
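The "at least four raters" result can be illustrated with the standard D-study formula for a simple person-by-rater design, where the relative G-coefficient is Eρ² = σ²_person / (σ²_person + σ²_person×rater,error / n_raters). The variance components below are assumed purely for illustration; they are not the components estimated in the study, whose design is more complex, with task, scoring method, and nested background-of-education facets.

```python
# D-study sketch: how the relative G-coefficient grows as raters are added,
# for a simple person-by-rater (p x r) design. Variance components are assumed
# for illustration and are NOT the study's estimates.

def relative_g(var_person: float, var_pr_error: float, n_raters: int) -> float:
    """Relative G-coefficient: var_p / (var_p + var_pr_error / n_raters)."""
    return var_person / (var_person + var_pr_error / n_raters)

var_person, var_pr_error = 1.0, 1.0  # assumed variance components
for n in range(1, 7):
    print(f"{n} rater(s): G = {relative_g(var_person, var_pr_error, n):.2f}")
```

With these assumed components, G reaches 0.80 at exactly four raters (0.50, 0.67, 0.75, 0.80, 0.83, 0.86), mirroring the pattern the abstract reports: averaging over more raters shrinks the rater-linked error term, with diminishing returns.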

