writing scores
Recently Published Documents

TOTAL DOCUMENTS: 96 (five years: 46)
H-INDEX: 9 (five years: 1)

Author(s):  
Hoeriyah

Writing is one of the four language skills that learners should master. The 2013 curriculum also states that one of the English language competencies specified at the high school level is that students must be able to compose short written texts using coherent text structures and linguistic elements fluently and accurately. In line with this, the study aims to find out whether marking system feedback can improve students' writing skills. This is a two-cycle classroom action research study conducted at SMA Negeri 1 Sewon. The subjects were 26 tenth-grade students of class X MIPA 1. The data were collected from observations, written documents, writing scores, and a questionnaire. The results showed that applying marking system feedback improved students' writing skills. The mean writing score was 63.65 (poor category) in the pre-cycle, 73.65 (fair category) in the first cycle, and 81.35 (good category) in the second cycle. In addition, students tended to respond positively to the implementation of marking system feedback: 88.46% found the feedback useful in writing activities and helpful for correcting their mistakes, 73.08% said the technique improved their understanding of grammar, and 80.77% said it led them to be more careful in their writing and motivated them to improve their compositions.


2021 ◽  
Vol 44 (4) ◽  
pp. 451-469
Author(s):  
Lili Zhang ◽  
Haitao Liu

Abstract This exploratory study examines whether genre has an impact on syntactic complexity and holistic rating in EFL writing. Over 300 sample texts produced by intermediate learners were collected from a test and from regular after-class assignments for English writing courses. Each participant completed two writing tasks, one argumentative and one narrative. Results show that genre type has a significant impact on L2 syntactic complexity, and the genre effect is stronger for timed writing tasks. L2 holistic ratings correlate with syntactic complexity on different measures depending on genre type and planning condition. Regression analyses reveal that, for timed writing tasks, clausal density (clauses per sentence) is a reliable predictor of holistic assessments of intermediate EFL learners' writing quality: it accounts for 6% of the score variance for timed writing and 10% for timed argumentative writing. Genre is also shown to be related to holistic ratings of EFL writing. Closer examination indicates that while syntactic complexity is predictive of holistic writing scores for argumentative writing, it does not correlate with holistic scores for narrative writing; linguistic features other than syntactic complexity may account for this. Overall, the study lends support to a genre effect in the relationship between syntactic complexity and holistic ratings of L2 writing quality.
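As a rough illustration of the kind of regression reported above, the following sketch fits an ordinary least squares model with clausal density (clauses per sentence) predicting holistic scores and reports R², the share of score variance explained. This is not the authors' code; the column names and simulated data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: clausal density and holistic scores for ~300 timed essays.
rng = np.random.default_rng(0)
df = pd.DataFrame({"clauses_per_sentence": rng.normal(2.0, 0.5, 300)})
df["holistic_score"] = 3 + 0.4 * df["clauses_per_sentence"] + rng.normal(0, 1, 300)

# Simple OLS: holistic score regressed on clausal density.
X = sm.add_constant(df[["clauses_per_sentence"]])
model = sm.OLS(df["holistic_score"], X).fit()
print(f"R^2 = {model.rsquared:.3f}")  # proportion of score variance explained
```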


2021 ◽  
Vol 24 (3) ◽  
pp. 166-185
Author(s):  
Pakize Uludag ◽  
Kim McDonough ◽  
Caroline Payant

This study compared English L2 writers' (N = 111) performance on an integrated writing task from the Canadian Academic English Language (CAEL) Assessment under three prewriting planning conditions: required self-timed planning, required fixed-time planning, and suggested (i.e., optional) planning. The participants' integrated essays were scored according to the CAEL writing bands by raters at Paragon Testing Inc. The effects of planning condition on the participants' planning time, writing time, and integrated writing scores were analyzed using MANOVA, and student interviews were analyzed using thematic content analysis. The results indicated that planning time was the only variable affected by planning condition, with students in the required self-timed planning condition taking more time to plan before beginning to write. Students' perceptions of prewriting planning are discussed in terms of implications for the teaching and assessment of L2 integrated writing.
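A minimal sketch of a one-way MANOVA of this shape, using statsmodels on simulated data; the variable names, group sizes, and formula are assumptions for illustration, not the study's actual analysis code.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: 111 writers in three planning conditions,
# with three dependent variables per writer.
rng = np.random.default_rng(1)
conditions = ["self_timed", "fixed_time", "optional"]
df = pd.DataFrame({
    "condition": rng.choice(conditions, 111),
    "planning_time": rng.normal(8, 3, 111),
    "writing_time": rng.normal(35, 5, 111),
    "writing_score": rng.normal(60, 10, 111),
})

# One-way MANOVA: do the three outcomes jointly differ by planning condition?
fit = MANOVA.from_formula(
    "planning_time + writing_time + writing_score ~ condition", data=df
)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```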


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Lu Zhang ◽  
Lawrence Jun Zhang

Abstract Studies on the academic writing of EFL students have found that they are less successful in presenting an effective stance. It has been assumed that how they perceive authorial stance may influence their stance deployment, yet few studies have assessed student writers' perceptions of stance. To fill this gap, this research develops and validates an instrument, the Perceptions of Authorial Stance Questionnaire (PASQ), for assessing EFL students' perceptions of authorial stance and further explores their relationships with stance deployment and the overall quality of English academic writing. Taking a dialogic perspective, we designed the research as two studies. In Study 1, exploratory factor analysis with 197 respondents and subsequent confirmatory factor analysis with another sample of 191 respondents yielded a 17-item scale with two factors: dialogic contraction and dialogic expansion. In Study 2, scores on the two subscales of the PASQ were examined in relation to the frequencies of various stance types and to writing scores. Results show that scores on the two subscales were positively correlated with the frequencies of different stance types; however, no significant relationship was detected between students' perceptions and their writing scores. Possible reasons for these findings and their pedagogical implications are discussed.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ali Khodi

Abstract The present study investigated factors that affect EFL writing scores using generalizability theory (G-theory). For this purpose, one hundred and twenty students completed one independent and one integrated writing task. Their performances were then scored by six raters: one self-rating, three peer ratings, and two instructor ratings. The main purpose of the study was to determine the relative and absolute contributions of different facets, such as student, rater, task, method of scoring, and background of education, to the validity of writing assessment scores. The results indicated three major sources of variance: (a) the student by task by method of scoring (nested in background of education) interaction (STM:B), contributing 31.8% of the total variance; (b) the student by rater by task by method of scoring (nested in background of education) interaction (SRTM:B), contributing 26.5%; and (c) the student by rater by method of scoring (nested in background of education) interaction (SRM:B), contributing 17.6%. With regard to the G-coefficients in the G-study (relative G-coefficient ≥ 0.86), the assessment results were found to be highly valid and reliable. The sources of error variance were the student by rater (nested in background of education) interaction (SR:B) and the rater by background of education interaction, contributing 99.2% and 0.8% of the error variance, respectively. Additionally, ten separate G-studies were conducted to investigate the contribution of different facets across rater, task, and method of scoring as the differentiation facet. These studies suggested that peer rating, the analytical scoring method, and integrated writing tasks were the most reliable and generalizable writing assessment designs. Finally, five decision studies (D-studies) were conducted at the optimization level, indicating that at least four raters (G-coefficient = 0.80) are necessary for a valid and reliable assessment. Based on these results, to achieve the greatest gain in generalizability, teachers should have their students take two writing assessments, and their performance should be rated with at least two scoring methods by at least four raters.
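For readers unfamiliar with G-theory, the sketch below estimates variance components and a relative G-coefficient for the simplest fully crossed persons x raters design. It is a textbook-style illustration on simulated scores, not a reproduction of the study's multi-facet nested design; all data and facet sizes are assumptions.

```python
import numpy as np

def g_study_p_x_r(scores: np.ndarray, n_raters_decision: int) -> dict:
    """ANOVA-based variance components for a persons x raters design
    (one score per cell) and the relative G-coefficient for a D-study
    with n_raters_decision raters."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
    ms_r = n_p * ((rater_means - grand) ** 2).sum() / (n_r - 1)
    resid = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

    var_pr_e = ms_pr                      # person x rater interaction + error
    var_p = max((ms_p - ms_pr) / n_r, 0)  # universe-score (person) variance
    var_r = max((ms_r - ms_pr) / n_p, 0)  # rater variance

    rel_error = var_pr_e / n_raters_decision
    g_rel = var_p / (var_p + rel_error)   # relative G-coefficient
    return {"var_p": var_p, "var_r": var_r, "var_pr_e": var_pr_e, "g_rel": g_rel}

# Hypothetical scores: 120 students each rated by 4 raters.
rng = np.random.default_rng(3)
scores = rng.normal(70, 8, size=(120, 1)) + rng.normal(0, 4, size=(120, 4))
print(g_study_p_x_r(scores, n_raters_decision=4))
```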


Author(s):  
Jingyang Jiang ◽  
Peng Bi ◽  
Nana Xie ◽  
Haitao Liu

Abstract This study explores the extent to which phraseological complexity measures can predict the second-language (L2) writing quality of low- and intermediate-level learners, especially in comparison with the more traditional syntactic and lexical complexity measures. To measure phraseological complexity, we evaluated phraseological diversity and phraseological sophistication, focusing on two commonly used phraseological units (adjective-noun and verb-direct-object combinations). Our findings show that phraseological, syntactic, and lexical complexity measures explain 36.0%, 34.4%, and 56.5% of the variance in writing scores, respectively. Although lexical complexity measures account for more of the variance in writing scores, all three complexity dimensions contribute to the prediction of writing scores, as evinced by the combined model explaining 63.4% of the variance. Most phraseological complexity measures loaded onto separate factors in the factor analysis, suggesting that phraseological complexity is an independent dimension of L2 complexity.
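A hedged sketch of the variance-explained comparison described above: fit separate OLS models for each complexity dimension and a combined model, then compare their R² values. The predictor names and simulated data are placeholders, not the study's actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical learner data: two measures per complexity dimension plus a writing score.
rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 6)),
                  columns=["phras_div", "phras_soph",
                           "synt_len", "synt_sub",
                           "lex_div", "lex_soph"])
df["score"] = (0.5 * df["lex_soph"] + 0.3 * df["phras_soph"]
               + 0.3 * df["synt_sub"] + rng.normal(0, 1, n))

def r_squared(predictors):
    """R^2 of an OLS model predicting writing score from the given columns."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["score"], X).fit().rsquared

print("phraseological:", r_squared(["phras_div", "phras_soph"]))
print("syntactic:     ", r_squared(["synt_len", "synt_sub"]))
print("lexical:       ", r_squared(["lex_div", "lex_soph"]))
print("combined:      ", r_squared(list(df.columns[:-1])))
```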


Author(s):  
Haeza Haron ◽  
Shaidatul Akma Adi Kasuma ◽  
Ayuni Akhiar

This study uses WhatsApp to facilitate online discussions for developing content ideas for ESL writing. A quasi-experimental study was conducted with 33 pre-university students participating in online discussions on WhatsApp for five weeks to support face-to-face instruction. A pre-test, a post-test, and five weekly writing tests were administered, examined, and analysed to identify the participants' writing performance. The findings show a significant improvement in the experimental group's essay writing scores after using WhatsApp for online discussions. The participants were able to use the content generated from the MWEG to support the arguments in their essays. Besides content idea development, the experimental group saw the WhatsApp online discussions as boosting confidence, raising motivation, and encouraging interactions among students. The themes that emerged from the WhatsApp group interaction were lively discussion, sharing of opinions, and turn-taking. While the control group also improved their test scores, the improvement was not as significant as the experimental group's.
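The abstract does not name the statistical test, but a comparison of this kind is often run as an independent-samples t-test on pre-to-post gain scores. The sketch below shows that assumed analysis on simulated data; the group split and score scale are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical pre/post essay scores for the experimental (WhatsApp) and control groups.
rng = np.random.default_rng(5)
pre_exp, post_exp = rng.normal(60, 8, 17), rng.normal(70, 8, 17)
pre_ctrl, post_ctrl = rng.normal(60, 8, 16), rng.normal(64, 8, 16)

# Compare pre-to-post gains between the two groups.
t_stat, p_value = ttest_ind(post_exp - pre_exp, post_ctrl - pre_ctrl)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```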


Author(s):  
Pusfika Rayuningtya ◽  
Ika Fitriani

Motivated by the growth of social media around the globe, including in Indonesia, educational practitioners need to be creative and seize this opportunity to advance learning goals, for example by using Facebook, Twitter, Instagram, Line, and other social media in educational settings. Among these, Instagram has grown in popularity, particularly in Indonesia, where it has 22 million users. It is an online platform on which users can share their stories via uploaded photos. Recently, it has been used not only for photo-story sharing but also for online shopping, news updates, and video conferencing. As Instagram offers promising features, this study explored how the platform can be applied to improve students' English written competence, focusing on reading and writing. This study is action research that investigates the use of Instagram as a social-and-educational medium offering new language learning experiences in a project called InstaGlish (Instagram English). The data were collected from classroom observation during the project, students' Instagram photo posts, captions, and comments, and students' reading and writing scores after project implementation. A questionnaire and direct interviews with the students were also carried out to give a more thorough understanding of the students' responses regarding how effectively InstaGlish helps them learn and use their English. In addition, the findings of this study are expected to give fruitful insight into how to use social media not merely for fun-without-meaning activities but for fun-and-meaningful new learning experiences.


2021 ◽  
Vol 6 (1) ◽  
pp. 11-17
Author(s):  
Moh Hafidz

The graphic organizer strategy uses visual mapping to organize general ideas into particular ideas in an argumentative paragraph in order to improve students' writing scores. The purpose of this research is to enrich theoretical and practical strategies for writing argumentative paragraphs. This research is a quasi-experimental study with a pretest-posttest non-equivalent group design and a non-random sampling technique, with an experimental group of 23 participants and a control group of 23 participants. The experts who validated the test were senior English lecturers, and the researcher used Cronbach's alpha to measure reliability, which yielded 0.721 (acceptable). As a result, the graphic organizer strategy significantly affected the students' writing achievement in argumentative paragraphs, especially in the organization aspect. Additionally, this strategy allowed them to explore their ideas independently and to build up connecting words unconsciously.
DOI 10.26905/enjourme.v6i1.5701
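Since the abstract reports a Cronbach's alpha of 0.721, here is a short sketch of how that reliability coefficient is computed from an item-score matrix. The data are simulated and the number of scoring criteria is an assumption.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_examinees, n_items) matrix of item or criterion scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical rubric scores: 46 students x 5 scoring criteria.
rng = np.random.default_rng(6)
base = rng.normal(3, 0.8, size=(46, 1))
items = base + rng.normal(0, 0.6, size=(46, 5))
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```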


2021 ◽  
Author(s):  
Brent Bridgeman

Graduate school programs that are considering dropping the GRE as an admissions tool often focus on claims that the test is biased and does not predict valued outcomes. This paper addresses the bias issue and provides evidence related to the prediction of valued outcomes. Two studies are included. The first uses data from chemistry and computer engineering programs at a flagship state university and an Ivy League university to demonstrate the ability of the GRE to predict dropout. The second shows the relationship of GRE Analytical Writing scores to writing produced as part of graduate school coursework. Both studies present results that are both practically and statistically significant.

