Automated Writing Evaluation
Recently Published Documents


TOTAL DOCUMENTS

97
(FIVE YEARS 48)

H-INDEX

11
(FIVE YEARS 2)

2021 · Vol 14 (12) · pp. 189
Author(s):  
Ameni Benali

It is undeniable that attempts to develop automated feedback systems that support and enhance language learning and assessment have increased in recent years. The growing demand for using technology in the classroom, together with the promotion of automated written-feedback programs by their developers and designers, drives many educational institutions to acquire and use these tools for educational purposes (Chen & Cheng, 2008). It remains debatable, however, whether students’ use of these tools leads to improvement in their essay quality or writing outcomes. In this paper I investigate the affordances and shortcomings of automated writing evaluation (AWE) for students’ writing in ESL/EFL contexts. My discussion shows that AWE can improve the quality of writing and learning outcomes if it is integrated with and supported by human feedback. I provide recommendations for further research into improving AWE tools so that they give more effective and constructive feedback.


SAGE Open · 2021 · Vol 11 (4) · pp. 215824402110607
Author(s):  
Rui Li

Despite the growing attention being paid to the use of Automated Writing Evaluation (AWE) in China, it remains uncertain what factors lie behind EFL (English-as-a-foreign-language) learners’ continuance intention to use it. To this end, adding two external factors (computer self-efficacy and perceived ease of use) to the expectation confirmation model (ECM), we surveyed 345 Chinese EFL learners and tested a number of proposed hypotheses using their response data. Data were analyzed using descriptive statistics and structural equation modeling (SEM). Results demonstrated that four factors directly influenced EFL learners’ continuance intention to use AWE, of which perceived ease of use was the most significant. Furthermore, confirmation was the most important factor influencing EFL learners’ satisfaction with, and perceived ease of use of, AWE. Perceived ease of use of AWE in turn played an important role in shaping EFL learners’ perceived usefulness of AWE. Implications of the findings are also discussed.


Author(s):  
Shuai Zhang

Abstract This review provides a brief description of widely used automated writing evaluation systems, an explanation of their underlying technologies, working principles, and scopes of application, followed by a critical evaluation of the advantages and disadvantages of using these systems in educational contexts. The review aims to offer implications for language assessment practice and relevant research.


2021 · Vol 4 (4) · pp. 665
Author(s):  
Muhamad Nova
Francisca Titing Koerniawaty

In the professional sector, both spoken and written English communication are essential skills for vocational students to master. However, as non-native speakers, students still face errors in writing as an obstacle to applying their English proficiency. This research aimed at identifying the grammatical errors occurring in vocational students’ writing in order to prevent error fossilization. The study was conducted in one vocational school in Denpasar, with a total of 45 students purposively selected, and used a qualitative approach with a case study design. The data were gathered by evaluating the students’ writing through an automated writing evaluation program; the type of writing evaluated was the application letter. The data were then analyzed in three phases: data reduction, data display, and conclusion drawing. As a result, 268 grammatical errors were found in the students’ application letters. These errors were categorized into 17 types, with missing articles, misspelling, capitalization, incorrect number agreement, and incorrect prepositions as the most frequent errors found. Keywords: Application Letter, Grammatical Errors, Vocational Education


2021 · Vol 27 (1) · pp. 41
Author(s):  
Meilisa Sindy Astika Ariyanto
Nur Mukminatien
Sintha Tresnadewi

Automated Writing Evaluation (AWE) programs have emerged as the latest trend in EFL writing classes. AWE programs act as a supplement to teacher feedback, offering automated suggestions and corrections for students’ linguistic errors in grammar, vocabulary, and mechanics. As there is a need for better understanding of the different AWE brands used with different levels of students, this research sheds light on six university students’ views of one AWE program, ProWritingAid (PWA). The six students are categorized as having high or low writing achievement. This descriptive study delineates the students’ perceptions qualitatively; a semi-structured interview was used to collect the data. The findings suggest that the students held positive views of PWA because it could make class time more effective; it offered useful feedback on grammar, vocabulary choice, and mechanics; and it built students’ self-confidence in their compositions. In addition, for different reasons, the students engaged differently with PWA to enhance their drafts, e.g. using PWA only for the first draft or for both the first and final drafts. Finally, despite the students’ constructive views on PWA, there was a risk that students engaged only superficially with the program by accepting its corrections directly.


Author(s):  
Jianmin Gao

The study explored the feedback quality of Pigai, an Automated Writing Evaluation (AWE) system that has been widely applied in English teaching and learning in China. The study focused not only on the diagnostic precision of the feedback but also on students’ perceptions of its use in their daily writing practice. Taking 104 university students’ final exam essays as the research materials, a paired-sample t-test was conducted to compare the mean number of errors identified by Pigai and by professional teachers. It was found that Pigai feedback could not diagnose the essays as well as the human feedback given by experienced teachers; however, it was quite competent at identifying lexical errors. The analysis of students’ perceptions indicated that most students found Pigai feedback multi-functional, but inadequate at identifying collocation errors and at giving suggestions on syntactic use. The implications and limitations of the study are discussed at the end of the paper.
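The paired-sample t-test used in the study above compares two measurements on the same essays (errors flagged by the AWE system vs. errors flagged by teachers). A minimal sketch of that computation is shown below; the error counts are purely hypothetical illustrations, not data from the study:

```python
import math

def paired_t_test(x, y):
    """Paired-sample t statistic for two matched measurements.

    Computes the differences per pair, then t = mean(d) / (sd(d) / sqrt(n)).
    Returns the t statistic and the degrees of freedom (n - 1).
    """
    assert len(x) == len(y) and len(x) > 1
    n = len(x)
    diffs = [a - b for a, b in zip(x, y)]
    mean_d = sum(diffs) / n
    # Sample variance of the differences (Bessel's correction, n - 1)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical per-essay error counts: AWE system vs. teacher
awe_errors = [12, 9, 15, 7, 11, 10]
teacher_errors = [14, 11, 16, 9, 13, 12]
t, df = paired_t_test(awe_errors, teacher_errors)
print(round(t, 2), df)  # large negative t: the AWE consistently flags fewer errors
```

In practice one would look the t statistic up against the t distribution with n − 1 degrees of freedom (or use `scipy.stats.ttest_rel`) to obtain a p-value; the sketch stops at the statistic itself.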

