Study on the Computer-Assisted English Writing Instruction

2014 ◽  
Vol 1044-1045 ◽  
pp. 1660-1663
Author(s):  
Tian Xu

Computer technology has been widely applied in English teaching and has become a strong support for modern English instruction. In English writing classes, computer-based multimedia enlivens the classroom atmosphere, arouses students' interest, and at the same time improves the efficiency of English writing. However, computer-assisted English writing instruction also has some potential drawbacks.

2014 ◽  
Vol 945-949 ◽  
pp. 3546-3549
Author(s):  
Jun Lu ◽  
Chun Xiao Zhao ◽  
Rui Hua Xu

With the boom of modern science and technology, the computer and the Internet have permeated daily life and have been reshaping traditional teaching modes. Through computers and multimedia, traditional English instruction has changed dramatically. This paper introduces what CAE is, its application in the process writing approach to English writing instruction, and its effect on students' writing abilities. The quantitative part of the study is a quasi-experimental design with a 12-week intervention; the procedure comprises a pretest, writing instruction, a posttest, and data analysis. The results indicate that the process approach supported by CAE is an effective way to improve students' writing interest and abilities.
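A pretest/posttest design of the kind described above is commonly analyzed with a paired-samples t-test on score gains. A minimal sketch in Python; the scores below are hypothetical illustrations, not data from the study:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pretest/posttest score gains."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of gains
    return mean / math.sqrt(var / n)  # t with n - 1 degrees of freedom

# Hypothetical writing scores (0-100) before and after a 12-week intervention
pretest = [62, 58, 70, 65, 55, 60, 68, 63]
posttest = [68, 64, 75, 70, 59, 67, 74, 66]
t = paired_t(pretest, posttest)
print(round(t, 2))
```

A large positive t here indicates that the posttest gains are unlikely to be chance; an actual study would also report the p-value and an effect size.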


RELC Journal ◽  
2021 ◽  
pp. 003368822098022
Author(s):  
Lianjiang Jiang ◽  
Shulin Yu ◽  
Nan Zhou ◽  
Yiqin Xu

While there is no lack of studies on the major approaches to L2 writing instruction (i.e., the product-, process-, and genre-oriented approaches), it remains unclear whether and how these theory-based approaches have been translated into students' experiences of L2 writing pedagogy. This study examined students' experiences of L2 writing instructional approaches in the Chinese EFL context. A sample of 1,190 students from 39 Chinese universities was surveyed about the English writing instruction they had received at university. Results show that the process-oriented approach was the most widely experienced, followed by the genre- and product-oriented approaches. Latent profile analyses revealed four distinct profiles of writing pedagogy in students' experiences: the indistinctive pattern, the product-dominant pattern, the process/genre-dominant pattern, and the synthetic pattern. These patterns indicate that writing instruction in Chinese university English programs has yet to meet the demands of students' L2 writing development. The study contributes to our knowledge of how L2 writing instructional approaches are experienced by students of various demographic backgrounds and of how writing curricula and pedagogies can be further improved.
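Latent profile analysis is normally run with mixture-model software; as a deliberately simplified stand-in, the sketch below groups hypothetical student exposure ratings (product, process, genre, each on a 1-5 scale) with plain k-means. All numbers and the three-cluster setup are invented for illustration and are not the study's data or method:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: a simplified stand-in for latent profile analysis."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean distance)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster; keep old center if empty
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical (product, process, genre) exposure ratings on a 1-5 scale
students = [
    (4.5, 2.0, 2.1), (4.3, 1.8, 2.0),  # product-dominant pattern
    (2.0, 4.4, 4.2), (2.2, 4.6, 4.0),  # process/genre-dominant pattern
    (4.4, 4.3, 4.5), (4.2, 4.5, 4.4),  # synthetic pattern (high on all)
]
centers, clusters = kmeans(students, k=3)
print(sorted(len(c) for c in clusters))
```

Unlike k-means, a genuine latent profile analysis fits a Gaussian mixture and yields posterior membership probabilities, which is why published LPA studies also report fit indices when choosing the number of profiles.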


2021 ◽  
Vol 11 (2) ◽  
pp. 68
Author(s):  
Jian Wang ◽  
Lifang Bai

Computer Assisted Language Learning (CALL) has become a burgeoning industry in China, one case in point being the extensive use of Automated Writing Evaluation (AWE) systems in college English writing instruction to reduce teachers' workload. What warrants special mention, however, is that most teachers include automatic scores in the formative evaluation of their courses while paying scant attention to the scoring efficacy of these systems (Bai & Wang, 2018; Wang & Zhang, 2020). To gain a clearer picture of the scoring validity of two commercially available Chinese AWE systems (Pigai and iWrite), the present study sampled 486 timed CET-4 (College English Test Band 4) essays produced by second-year non-English majors from 8 intact classes. Data comprising the maximum score difference, agreement rate, Pearson's correlation coefficient, and Cohen's kappa were collected to gauge human-machine and machine-machine congruence. Quantitative linguistic features of the sample essays, including accuracy, lexical and syntactic complexity, and discourse features, were also examined to investigate the differences (or similarities) in the construct representations valued by the two systems and by human raters. Results show that (1) Pigai and iWrite largely agreed with each other but differed considerably from human raters in essay scoring; (2) essays given high human scores tended to receive low machine scores; and (3) the machines relied heavily on quantifiable features, which had limited impact on human raters.
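The congruence statistics named above (Pearson's correlation coefficient and Cohen's kappa) can be computed directly from two raters' scores. A minimal Python sketch; the scores and the score-band thresholds below are hypothetical illustrations, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two lists of essay scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_kappa(x, y, labels):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(x)
    po = sum(a == b for a, b in zip(x, y)) / n  # observed agreement
    pe = sum((x.count(l) / n) * (y.count(l) / n) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)

def bands(scores):
    """Collapse raw scores into categorical bands for kappa (cutoffs are invented)."""
    return ['low' if v < 9 else 'mid' if v < 12 else 'high' for v in scores]

# Hypothetical human vs. machine scores for ten essays on a 15-point scale
human = [12, 9, 14, 8, 11, 10, 13, 7, 12, 9]
machine = [11, 9, 12, 9, 11, 10, 12, 8, 11, 10]
print(round(pearson_r(human, machine), 2))  # -> 0.97
print(round(cohens_kappa(bands(human), bands(machine),
                         ['low', 'mid', 'high']), 2))  # -> 0.52
```

Note that the two statistics answer different questions: a high correlation shows the raters rank essays similarly, while a modest kappa shows they still disagree on the absolute band an essay falls into, which is why the study reports both alongside the raw agreement rate.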

