Elementary Teachers’ Perceptions of Automated Feedback and Automated Scoring: Transforming the Teaching and Learning of Writing Using Automated Writing Evaluation

2021, pp. 104208
Author(s):  
Joshua Wilson ◽  
Cristina Ahrendt ◽  
Emily A. Fudge ◽  
Alexandria Raiche ◽  
Gaysha Beard ◽  
...  
2018, Vol 11 (8), pp. 126
Author(s):  
Beata Lewis Sevcikova

The present research assesses online open-source tools used in the L2 academic writing, teaching, and learning environment. As fairly little research has been conducted on how best to use online automated proofreaders for educational purposes, the objective of this study is to examine the potential of such tools. Unlike most studies focusing on Automated Writing Evaluation (AWE), this research concentrates only on online, open-source writing aids: grammar, spelling, and writing-style improvement tools available either for free or as paid versions. The accessibility of these tools and their ability to check language mistakes in academic writing, such as college-level essays, in real time motivates both teachers and students. The findings of this empirical study indicate that, despite some bias, computerized feedback facilitates language learning, assists in improving the quality of writing, and increases student confidence and motivation. The current study can help with the understanding of students’ needs in writing, as well as of their perceptions of automated feedback.


Author(s):  
Corey Palermo ◽  
Margareta Maria Thomson

The majority of United States students demonstrate only partial mastery of the knowledge and skills necessary for proficient writing. Researchers have called for increased classroom-based formative writing assessment to provide students with regular feedback about their writing performance and to support the development of writing skills. Automated writing evaluation (AWE) is a type of assessment for learning (AfL) that combines automated essay scoring (AES) and automated feedback with the goal of supporting improvements in students' writing performance. The current chapter first describes AES, AWE, and automated feedback. Next, results of an original study that examined students' and teachers' perceptions of automated feedback are presented and discussed. The chapter concludes with recommendations and directions for future research.
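As a concrete illustration of the AES component described above, the following is a minimal, hypothetical sketch: a regression over shallow surface features of an essay. All essays, scores, and features here are made-up placeholders; real AES engines use far richer linguistic and semantic models, and nothing below reflects the chapter’s actual system.

```python
# Toy sketch of automated essay scoring (AES): a regression over shallow
# text features. Essays and scores are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def shallow_features(essay: str) -> list[float]:
    """Extract simple surface features: length, word length, sentence length."""
    words = essay.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    n_sentences = max(sum(essay.count(c) for c in ".!?"), 1)
    return [n_words, avg_word_len, n_words / n_sentences]

# Hypothetical training essays with human-assigned holistic scores (1-6 scale).
essays = [
    "Dogs are nice. I like dogs.",
    "My summer vacation was fun because we traveled to the coast and swam.",
    "The essay argues, with evidence drawn from two sources, that recycling "
    "programs succeed when communities are given clear incentives.",
]
human_scores = [2.0, 3.5, 5.0]

model = Ridge(alpha=1.0).fit([shallow_features(e) for e in essays], human_scores)

new_essay = "Recycling helps the town. People should recycle more often."
print(f"Predicted score: {model.predict([shallow_features(new_essay)])[0]:.1f}")
```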


Author(s):  
Jianmin Gao

This study explored the feedback quality of Pigai, an automated writing evaluation (AWE) system that has been widely applied in English teaching and learning in China. The study focused not only on the diagnostic precision of the feedback but also on students’ perceptions of its use in their daily writing practice. Using 104 university students’ final exam essays as research materials, a paired-sample t-test was conducted to compare the mean number of errors identified by Pigai with the mean number identified by professional teachers. Pigai’s feedback did not diagnose the essays as well as the feedback given by experienced teachers; however, it was quite competent at identifying lexical errors. The analysis of students’ perceptions indicated that most students considered Pigai’s feedback multi-functional but inadequate at identifying collocation errors and offering suggestions on syntactic use. The paper closes with a discussion of the study’s implications and limitations.
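The paired-sample t-test described above can be sketched as follows. The error counts are simulated placeholders, not the study’s data, and scipy’s ttest_rel merely stands in for whatever software the author actually used.

```python
# Minimal sketch of a paired-sample t-test on per-essay error counts,
# assuming hypothetical simulated data rather than the study's materials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical error counts for the same 104 essays, scored two ways.
errors_pigai = rng.poisson(lam=6.0, size=104)                    # errors flagged by Pigai
errors_teacher = errors_pigai + rng.poisson(lam=2.0, size=104)   # teachers flag more

# Paired-sample t-test: are the mean error counts significantly different?
t_stat, p_value = stats.ttest_rel(errors_teacher, errors_pigai)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```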


Author(s):  
Nilupulee Nathawitharana ◽  
Qing Huang ◽  
Kok-Leong Ong ◽  
Peter Vitartas ◽  
Madhura Jayaratne ◽  
...  

As blended learning environments and digital technologies become integrated into the higher education sector, rich technologies such as analytics have shown promise in facilitating teaching and learning. One popular application of analytics is the Automated Writing Evaluation (AWE) system. Such systems can be used in a formative way, for example by providing students with feedback on digitally submitted assignments. This paper presents work on the development of an AWE software tool for an Australian university using advanced text analytics techniques. The tool was designed to give students timely feedback on their initial assignment drafts for revision and further improvement; it can also help academics better understand students’ assignment performance and thereby inform future teaching activities. The paper details the methodology used to develop the software and presents results from the analysis of text-based assignments submitted in two subjects. The results are discussed, highlighting how the tool can provide practical value, followed by insights into existing challenges and possible future directions.
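To make the idea of formative draft feedback concrete, here is a hypothetical, rule-based sketch of the kind of comments such a tool might generate. The rules and thresholds below are illustrative inventions, not the paper’s text analytics method.

```python
# Hypothetical rule-based draft feedback; illustrative only, not the
# paper's actual technique. Each rule produces a short formative comment.
import re

def draft_feedback(text: str) -> list[str]:
    """Return simple, rule-based comments on an assignment draft."""
    comments = []
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    long_ones = [s for s in sentences if len(s.split()) > 30]
    if long_ones:
        comments.append(f"{len(long_ones)} sentence(s) exceed 30 words; consider splitting them.")
    if len(text.split()) < 200:
        comments.append("Draft is under 200 words; expand your argument with evidence.")
    if "very" in text.lower():
        comments.append("Replace vague intensifiers like 'very' with precise wording.")
    return comments or ["No rule-based issues detected; see rubric for higher-level criteria."]

print(draft_feedback("This draft is very short. " * 3))
```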


ReCALL, 2021, pp. 1-13
Author(s):  
Aysel Saricaoglu ◽  
Zeynep Bilki

Automated writing evaluation (AWE) technologies are common supplementary tools for helping students improve their language accuracy using automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes. The potential of AWE as a voluntary language learning tool is unknown. This study reports on the voluntary use of Criterion by English as a foreign language students in two content courses for two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students’ performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, the findings also showed that engagement with automated feedback significantly reduced users’ errors from the first draft to the last in 11 error categories in total for the two assignments.
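The abstract does not name the non-parametric test used; a common choice for paired first-draft versus last-draft counts is the Wilcoxon signed-rank test. The sketch below assumes that choice and uses hypothetical per-student error counts, not the study’s data.

```python
# Hypothetical sketch of a non-parametric paired comparison like the one
# described above: first-draft vs. last-draft error counts per student.
from scipy import stats

# Hypothetical error counts in one Criterion error category.
first_draft = [9, 7, 12, 5, 8, 10, 6, 11, 7, 9]
last_draft  = [5, 4,  8, 3, 6,  7, 4,  6, 5, 6]

# Wilcoxon signed-rank test: did errors decrease significantly after revision?
stat, p_value = stats.wilcoxon(first_draft, last_draft, alternative="greater")
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```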


2021, Vol 14 (12), pp. 189
Author(s):  
Ameni Benali

It is undeniable that attempts to develop automated feedback systems that support and enhance language learning and assessment have increased in the last few years. The growing demand for technology in the classroom, together with the promotion of these tools by developers and designers of automated written feedback programs, drives many educational institutions to acquire and use them for educational purposes (Chen & Cheng, 2008). It remains debatable, however, whether students’ use of these tools leads to improvement in their essay quality or writing outcomes. In this paper I investigate the affordances and shortcomings of automated writing evaluation (AWE) for students’ writing in ESL/EFL contexts. My discussion shows that AWE can improve the quality of writing and learning outcomes if it is integrated with and supported by human feedback. I provide recommendations for further research into improving AWE tools so that they give more effective and constructive feedback.

