Writing Assessments
Recently Published Documents


TOTAL DOCUMENTS: 74 (five years: 21)

H-INDEX: 10 (five years: 1)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ali Khodi

The present study investigated factors that affect EFL writing scores using generalizability theory (G-theory). To this end, one hundred and twenty students completed one independent and one integrated writing task. Their performances were then scored by six raters: one self-rating, three peer ratings, and two instructor ratings. The main purpose of the study was to determine the relative and absolute contributions of different facets, such as student, rater, task, method of scoring, and background of education, to the validity of writing assessment scores. The results indicated three major sources of variance: (a) the student by task by method of scoring (nested in background of education) interaction (STM:B), contributing 31.8% of the total variance; (b) the student by rater by task by method of scoring (nested in background of education) interaction (SRTM:B), contributing 26.5%; and (c) the student by rater by method of scoring (nested in background of education) interaction (SRM:B), contributing 17.6%. With regard to the G-coefficients in the G-study (relative G-coefficient ≥ 0.86), the assessment results were found to be highly valid and reliable. The sources of error variance were the student by rater (nested in background of education) interaction (SR:B) and the rater by background of education interaction, contributing 99.2% and 0.8% of the error variance, respectively. Additionally, ten separate G-studies were conducted to investigate the contribution of different facets across rater, task, and method of scoring as differentiation facets. These studies suggested that peer rating, the analytic scoring method, and integrated writing tasks were the most reliable and generalizable designs for writing assessment. Finally, five decision studies (D-studies) were conducted at the optimization level, indicating that at least four raters (G-coefficient = 0.80) are necessary for a valid and reliable assessment. Based on these results, to achieve the greatest gain in generalizability, teachers should have their students take two writing assessments, and student performance should be rated with at least two scoring methods by at least four raters.
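
To make the closing D-study recommendation concrete, the sketch below computes the relative G-coefficient for a simplified person-by-rater design and scans the number of raters. The variance components here are hypothetical placeholders chosen for illustration, not the study's estimates, and the design is reduced to two facets rather than the full nested design the abstract describes.

```python
# A minimal sketch of D-study logic for a person-by-rater (p x r) design.
# var_p and var_pr are hypothetical variance components, not study values.

def relative_g(var_p: float, var_pr: float, n_raters: int) -> float:
    """Relative G-coefficient: universe-score (person) variance over
    itself plus relative error, i.e. the person-by-rater interaction
    variance averaged over the number of raters."""
    return var_p / (var_p + var_pr / n_raters)

var_p, var_pr = 0.50, 0.40  # hypothetical variance components
for n in range(1, 7):
    print(f"raters = {n}: relative G = {relative_g(var_p, var_pr, n):.2f}")
# The smallest n reaching G >= 0.80 is the recommended rater count;
# with these placeholder components that happens at n = 4 (G = 0.83).
```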


2021 ◽  
Vol 20 (3) ◽  
pp. ar33
Author(s):  
Juli D. Uhl ◽  
Kamali N. Sripathi ◽  
Eli Meir ◽  
John Merrill ◽  
Mark Urban-Lurain ◽  
...  

This study measures student learning by using a computer-automated tool to categorize ideas in student writing about cellular respiration after an interactive computer-based tutorial. Students from multiple institution types exhibited increased scientific thinking post-tutorial, and the tool captured students’ mixed ideas.


SAGE Open ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 215824402110094
Author(s):  
Yu Zhu ◽  
Andy Shui-Lung Fung ◽  
Liuyan Yang

Personality is an inherent rater characteristic that influences rating severity, but few studies have examined this relationship, and their findings have been inconclusive. This study re-investigated the relationship between raters’ personality and rating severity with tighter control of relevant variables and a more reliable analysis of rating severity. Female novice raters (n = 28) from a demographically homogeneous background were recruited to rate, on two occasions, essays written by 111 students in an intermediate-level Chinese as a foreign language program. Raters’ personality traits were measured using the complete version of the NEO-PI-R. A many-facet Rasch measurement model and repeated measurements were applied to yield more robust estimates of rating severity. In addition, the rating order effect was carefully controlled. Extroversion was found to be positively correlated with severity, r(26) = .495, p = .010. Furthermore, Extroversion was a valid predictor of severity, t(24) = 2.792, p = .010, R2 = .21, Cohen’s d = .77, Hattie’s r = .37. Practical implications for developing more individualized online rater calibration for large-scale writing assessments are discussed, followed by the limitations of the present study.
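
The correlation and regression statistics above can be reproduced procedurally with standard tools. The sketch below runs the same kind of analysis on simulated data; the trait scores, severity values, and effect size are assumptions for illustration only, not the study's data.

```python
# Hypothetical illustration: correlating rater Extroversion scores with
# Rasch severity estimates. Data are simulated; only the procedure
# (Pearson correlation and simple regression) mirrors the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 28                                # number of raters
extroversion = rng.normal(50, 10, n)  # NEO-PI-R trait scores (assumed scale)
severity = 0.015 * extroversion + rng.normal(0, 0.25, n)  # severity in logits

r, p = stats.pearsonr(extroversion, severity)
print(f"r({n - 2}) = {r:.3f}, p = {p:.3f}")

reg = stats.linregress(extroversion, severity)
print(f"slope = {reg.slope:.4f}, p = {reg.pvalue:.3f}, R^2 = {reg.rvalue ** 2:.2f}")
```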


Author(s):  
May Abdul Ghaffar

Many L2 learners show low motivation when it comes to writing. The intervention in this study engaged L2 students and their teacher in co-constructing writing rubrics to help them develop a better understanding and awareness of the writing criteria, enhance autonomy and collaboration, and gain ownership of and responsibility for developing their writing skills. The study investigated the impact of co-constructed rubrics on L2 learners’ writing skills and their perceptions of writing; it also examined the extent to which co-constructed rubrics can serve as a learning and assessment tool that helps teachers generate feedback conducive to learning and competency development in writing. This mixed-methods study integrates quantitative and qualitative data collection and analysis, including pre- and post-writing assessments for the intervention and comparison groups, classroom observations, pre- and post-interviews with the teacher and L2 students, and a pre- and post-questionnaire. Results revealed that the intervention class’s mean score increased significantly on the post-writing assessment, while the comparison class’s mean score decreased, though not significantly. Moreover, the survey showed that co-constructing the rubrics with the intervention students improved their attitudes towards writing. Class observations noted positive changes in class dynamics and improved levels of student interaction and engagement. Co-constructing rubrics underscored that writing is a skill that can be taught effectively, offering a response to the claim that ‘writing is a universal problem’.
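
The pre/post group comparison lends itself to a short worked example. The sketch below applies paired t-tests to simulated intervention and comparison groups; the group sizes, score distributions, and effect directions are assumptions made for illustration, not the study's data.

```python
# A minimal sketch of the pre/post comparison described above, using paired
# t-tests within each group. All scores are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_intervention = rng.normal(70, 8, 25)
post_intervention = pre_intervention + rng.normal(5, 4, 25)   # assumed gain
pre_comparison = rng.normal(70, 8, 25)
post_comparison = pre_comparison + rng.normal(-1, 4, 25)      # assumed slight drop

for name, pre, post in [("intervention", pre_intervention, post_intervention),
                        ("comparison", pre_comparison, post_comparison)]:
    t, p = stats.ttest_rel(post, pre)
    print(f"{name}: mean change = {(post - pre).mean():+.2f}, t = {t:.2f}, p = {p:.3f}")
```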


2020 ◽  
pp. 014473942096144
Author(s):  
Andrew Judge

Policy writing assessments are increasingly used as an alternative or supplementary method of assessment in the teaching of politics and policy. Such assessments, often referred to as ‘policy briefs’ or ‘briefing memos’, are used to develop writing skills and to encourage active learning of policy-related topics among students. While they can be readily adapted to different teaching and learning contexts, it can be challenging to make appropriate design choices so that policy writing assessments meet students’ learning aims. This article sets out a heuristic framework, derived from the existing literature on policy writing assessments, to help clarify these choices. It advocates viewing assessment design as embedded within course design and emphasises the pedagogical and contextual aspects of assessment design. To illustrate how this heuristic framework can help those involved in course design, the article concludes with a reconstruction of the design process for a policy writing assessment in an undergraduate course on Global Energy Politics.


2020 ◽  
pp. 016264342094560
Author(s):  
Amber Rowland ◽  
Sean J. Smith ◽  
K. Alisa Lowrey

Individuals with disabilities continue to struggle with writing; most students with disabilities do not score at even the basic level on writing assessments. Technology offers tools to support writing instruction, but many teachers acknowledge a lack of confidence in designing writing instruction with these tools. Using the 6 Traits of Writing model as a framework, this article describes how students with disabilities may be challenged in each trait and identifies technology that can support the attainment of skills within that trait area.


2020 ◽  
Vol 45 ◽  
pp. 100470
Author(s):  
Thomas Canz ◽  
Lars Hoffmann ◽  
Renate Kania
