The Impact of Using Automated Writing Feedback in ESL/EFL Classroom Contexts

2021 ◽  
Vol 14 (12) ◽  
pp. 189
Author(s):  
Ameni Benali

It is undeniable that attempts to develop automated feedback systems that support and enhance language learning and assessment have increased in recent years. The growing demand for technology in the classroom, together with the promotion of these tools by developers and designers of automated written-feedback programs, drives many educational institutions to acquire and use them for educational purposes (Chen & Cheng, 2008). It remains debatable, however, whether students’ use of these tools leads to improvement in their essay quality or writing outcomes. In this paper I investigate the affordances and shortcomings of automated writing evaluation (AWE) for students’ writing in ESL/EFL contexts. My discussion shows that AWE can improve the quality of writing and learning outcomes if it is integrated with and supported by human feedback. I provide recommendations for further research into improving AWE tools so that they give more effective and constructive feedback.

2018 ◽  
Vol 11 (8) ◽  
pp. 126
Author(s):  
Beata Lewis Sevcikova

The present research offers an assessment of the online open-source tools used in the L2 academic writing, teaching, and learning environment. As fairly little research has been conducted on how best to use online automated proofreaders for educational purposes, the objective of this study is to examine the potential of such online tools. Unlike most studies focusing on Automated Writing Evaluation (AWE), this research concentrates only on online, open-source writing aids: grammar, spelling, and writing-style improvement tools available either for free or as paid versions. The accessibility of these tools and their ability to check language mistakes in academic writing, such as college-level essays, in real time motivate both teachers and students. The findings of this empirically based study indicate that, despite some bias, computerized feedback facilitates language learning, assists in improving the quality of writing, and increases student confidence and motivation. The current study can contribute to the understanding of students’ needs in writing, as well as of their perceptions of automated feedback.


ReCALL ◽  
2021 ◽  
pp. 1-13
Author(s):  
Aysel Saricaoglu ◽  
Zeynep Bilki

Abstract: Automated writing evaluation (AWE) technologies are common supplementary tools for helping students improve their language accuracy using automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes. The potential of AWE as a voluntary language learning tool is unknown. This study reports on the voluntary use of Criterion by English as a foreign language (EFL) students in two content courses for two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students’ performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, the findings also showed that engagement with automated feedback significantly reduced users’ errors from the first draft to the last in 11 error categories in total across the two assignments.
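As a rough illustration of the kind of draft-to-draft summary such performance reports allow, the sketch below (Python, with invented error counts and category names rather than the study's data) tabulates per-category errors in first versus last drafts and the percentage reduction.

```python
# Hypothetical illustration: summarizing per-category error counts across
# first and last drafts, as an AWE performance report might allow.
# All category names and numbers below are invented placeholders.
import pandas as pd

reports = pd.DataFrame(
    {
        "category": ["subject-verb agreement", "article errors", "run-on sentences"],
        "first_draft_errors": [42, 57, 18],
        "last_draft_errors": [20, 31, 9],
    }
)

# Absolute and relative reduction from first to last draft per category.
reports["reduction"] = reports["first_draft_errors"] - reports["last_draft_errors"]
reports["pct_reduction"] = 100 * reports["reduction"] / reports["first_draft_errors"]

print(reports.to_string(index=False))
```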


ReCALL ◽  
2018 ◽  
Vol 31 (2) ◽  
pp. 189-203 ◽  
Author(s):  
Aysel Saricaoglu

Abstract: Even though current technologies allow for automated feedback, evaluating content and generating discourse-specific feedback is still a challenge for automated systems, which explains the gap in research investigating the effect of such feedback. This study explores the impact of automated formative feedback on the improvement of English as a second language (ESL) learners’ written causal explanations within two cause-and-effect essays and across pre- and post-tests. Pre- and post-test drafts, feedback reports for first and revised drafts from the automated writing evaluation system, and screen-capture videos collected from 31 students enrolled in two sections of an advanced-low level academic writing class were analyzed through descriptive statistics and the Wilcoxon signed-rank test. Findings revealed statistically significant changes in learners’ causal explanations within one cause-and-effect essay, while no significant improvement was observed across pre- and post-tests. The findings of this study offer not only insights into how to further improve automated discourse-specific feedback but also pedagogical implications for better learning outcomes.
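For readers unfamiliar with the Wilcoxon signed-rank test mentioned above, it compares paired measurements without assuming normality. A minimal Python sketch with hypothetical pre-/post-test counts (not the study's data) follows.

```python
# Minimal sketch of a Wilcoxon signed-rank test on paired pre-/post-test
# measures (e.g., counts of accurate causal explanations per learner).
# The scores below are invented placeholders, not data from the study.
from scipy.stats import wilcoxon

pre = [3, 5, 2, 4, 6, 3, 5, 4, 2, 6]
post = [4, 6, 2, 5, 7, 4, 5, 6, 3, 7]

# Paired, non-parametric comparison of pre- vs. post-test values.
statistic, p_value = wilcoxon(pre, post)
print(f"W = {statistic:.1f}, p = {p_value:.3f}")
```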


2021 ◽  
Vol 11 (2) ◽  
pp. 68
Author(s):  
Jian Wang ◽  
Lifang Bai

Computer Assisted Language Learning (CALL) has become a burgeoning industry in China, one case in point being the extensive employment of Automated Writing Evaluation (AWE) systems in college English writing instruction to reduce teachers’ workload. Nonetheless, what warrants special mention is that most teachers include automatic scores in the formative evaluation of the relevant courses while paying scant attention to the scoring efficacy of these systems (Bai & Wang, 2018; Wang & Zhang, 2020). To gain a clearer picture of the scoring validity of two commercially available Chinese AWE systems (Pigai and iWrite), the present study sampled 486 timed CET-4 (College English Test Band 4) essays produced by second-year non-English majors from 8 intact classes. Data comprising the maximum score difference, the agreement rate, Pearson’s correlation coefficient, and Cohen’s kappa were collected to gauge human-machine and machine-machine congruence. Quantitative linguistic features of the sample essays, including accuracy, lexical and syntactic complexity, and discourse features, were also examined to investigate the differences (or similarities) in the construct representation valued by the two systems and by human raters. Results show that (1) Pigai and iWrite largely agreed with each other but differed considerably from human raters in essay scoring; (2) essays given high human scores were prone to be assigned low machine scores; and (3) the machines relied heavily on quantifiable features, which, however, had limited impact on human raters.
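For reference, the congruence measures named above can be computed for a human-machine score pairing along the following lines. This is only a sketch with hypothetical band scores; the study's exact operationalization of the agreement rate (exact versus adjacent agreement) is not assumed here.

```python
# Illustrative computation of human-machine congruence measures using
# hypothetical band scores rather than the study's data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([11, 9, 12, 8, 10, 13, 7, 11])
machine = np.array([10, 9, 11, 9, 10, 11, 8, 10])

max_diff = int(np.max(np.abs(human - machine)))           # maximum score difference
agreement = float(np.mean(np.abs(human - machine) <= 1))   # agreement rate (here: within one band)
r, _ = pearsonr(human, machine)                            # Pearson's correlation coefficient
kappa = cohen_kappa_score(human, machine)                  # Cohen's kappa on discrete scores

print(max_diff, round(agreement, 2), round(r, 2), round(kappa, 2))
```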


Author(s):  
Corey Palermo ◽  
Margareta Maria Thomson

The majority of United States students demonstrate only partial mastery of the knowledge and skills necessary for proficient writing. Researchers have called for increased classroom-based formative writing assessment to provide students with regular feedback about their writing performance and to support the development of writing skills. Automated writing evaluation (AWE) is a type of assessment for learning (AfL) that combines automated essay scoring (AES) and automated feedback with the goal of supporting improvements in students' writing performance. The current chapter first describes AES, AWE, and automated feedback. Next, results of an original study that examined students' and teachers' perceptions of automated feedback are presented and discussed. The chapter concludes with recommendations and directions for future research.


2017 ◽  
Vol 7 (3) ◽  
pp. 121 ◽  
Author(s):  
Fangyuan Du

This study aims to analyze the argument-counterargument structure of English argumentative essays written by Chinese EFL university students, based on an adapted version of Toulmin’s (2003) model of argument structure, which comprises four elements (i.e., claim, data, counterargument, and rebuttal). It also measures whether there is a correlation between the use of counterargument structure and the participants’ overall essay quality as assessed by an online AWE (Automated Writing Evaluation) program. Three hundred and ninety students from various majors at a Chinese university submitted their argumentative essays in English online. The results demonstrated that half of the participants developed a one-sided model of argumentation, while the other half used an argument-counterargument structure in their essays. The participants’ use of counterarguments affected the overall quality of their essays. Pedagogical implications of these findings are also discussed.
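One common way to quantify such a relationship is a point-biserial correlation between a binary indicator of counterargument use and the AWE-assigned score. The sketch below uses invented values and is not the study's actual analysis.

```python
# Minimal sketch: point-biserial correlation between a binary indicator of
# counterargument use (1 = used, 0 = one-sided argument) and an AWE score.
# All values are invented placeholders, not the study's data.
from scipy.stats import pointbiserialr

used_counterargument = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
essay_score = [82, 70, 78, 85, 68, 72, 80, 65, 88, 74]

r_pb, p_value = pointbiserialr(used_counterargument, essay_score)
print(f"r_pb = {r_pb:.2f}, p = {p_value:.3f}")
```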


Author(s):  
Justin C. W. Debuse ◽  
Meredith Lawley ◽  
Rania Shibl

Assessment of student learning is a core function of educators. Ideally, students should be provided with timely, constructive feedback to facilitate learning. However, the provision of high-quality feedback becomes more complex as class sizes increase, modes of study expand, and academic workloads grow. ICT solutions are being developed to facilitate quality feedback without impacting adversely upon staff workloads. Hence, the research question of this study is: 'How do academic staff perceive the usefulness of an automated feedback system in terms of impact on workloads and quality of feedback?' This study used an automated feedback generator (AFG) across multiple tutors and assessment items within an MBA course delivered in a variety of modes. All academics marking in the course completed a survey based on an adaptation of the unified theory of acceptance and use of technology (UTAUT) model. Results indicated that while the workload impact was generally positive, with savings in both cost and time, improvements and modifications to the system could further reduce workloads. Furthermore, results indicated that the AFG improves feedback quality in terms of timeliness, greater consistency between markers, and an increase in the amount of feedback provided.

