scoring rubrics
Recently Published Documents


TOTAL DOCUMENTS: 81 (five years: 27)
H-INDEX: 9 (five years: 0)

Languages · 2021 · Vol 6 (4) · pp. 204
Author(s): Liliann Byman Frisén, Pia Sundqvist, Erica Sandlund

Assessment of foreign/second language (L2) oral proficiency is known to be complex and influenced by the local context. In Sweden, extensive assessment guidelines for the National English Speaking Test (NEST) are offered to teachers, who act as raters of their own students’ performances on this high-stakes L2 English oral proficiency (OP) test. Despite guidelines, teachers commonly construct their own NEST scoring rubric. The present study aims to unveil teachers-as-raters’ conceptualizations, as these emerge from the self-made scoring rubrics, and possible transformations of policy. Data consist of 20 teacher-generated scoring rubrics used for assessing NEST (years 6 and 9). Rubrics were collected via personal networks and online teacher membership groups. Employing content analysis, data were analysed qualitatively to examine (i) what OP sub-skills were in focus for assessment, (ii) how sub-skills were conceptualized, and (iii) scoring rubric design. Results showed that the content and design of rubrics were heavily influenced by the official assessment guidelines, which led to broad consensus about what to assess—but not about how to assess. Lack of consensus was particularly salient for interactive skills. Analysis of policy transformations revealed that teachers’ self-made templates, in fact, lead to an analytic rather than a holistic assessment practice.


Author(s): Emily Ee Ching Choong, Pravina Manoharan, Souba Rethinasamy

Amid a global pandemic, while schools in many parts of the world were closed to adhere to quarantine orders, schools in Japan resumed face-to-face classes after only a month of closure, with strict adherence to COVID-19 guidelines and standard operating procedures (SOP). This study examined how speaking assessments were administered face-to-face for Grade 5 and 6 elementary school students before and after the introduction of the Common European Framework of Reference (CEFR), and amid the pandemic between April and October 2020. The paper also reports the challenges and strategies involved in carrying out the speaking assessments following the CEFR while adhering to the SOP. The study employed a qualitative research method that used semi-structured interviews to elicit information from four teachers who taught in eight schools within Niigata City, Japan. Findings suggest that prior to the implementation of the CEFR, not all teachers carried out speaking assessments; its implementation, however, emphasised the need to teach speaking and to carry out speaking assessments. The CEFR also served as guidance for the teachers in preparing the assessment scoring rubrics. The results also showed that during the pandemic the speaking assessments were implemented individually instead of in groups, and that the presence of masks increased the students' anxiety and affected their performance. However, the teachers employed various strategies to overcome these challenges, modifying the assessment tasks and utilising web conferencing technology.


2021 · Vol 4 (4) · pp. p16
Author(s): Hayat Rasheed Alamri, Rania Daifullah Adawi

This mixed-method study explored the perspectives of Saudi EFL teachers concerning the use of Writing Scoring Rubrics (WSRs) to correct students' written work and instruct EFL writing classes. The study sample included 106 Saudi EFL teachers, who answered the twenty-one closed-ended questions and the first open-ended question, with twenty-five answering the second open-ended question. The findings reveal that the teachers frequently employed in-class correction and feedback to correct their students' written work, with nearly one-third using assessment techniques that included WSRs, self-assessment, peer editing, journals, and portfolios. The results of the second question indicate that Saudi EFL teachers generally engage students in creating customized WSRs. The findings also revealed that Saudi EFL teachers consider WSRs beneficial to both students and teachers, and that some experienced EFL teachers might view them as a practical correction or assessment method that improves students' writing. Therefore, this study contributes to a growing body of literature highlighting the importance of WSRs in teaching and assessing writing skills.


2021 · Vol 14 (12) · pp. 183
Author(s): Patteera Thienpermpool

Assessment has shifted from assessment of learning to assessment for learning. Self-assessment and peer assessment therefore appear to play more important roles, as they encourage students to critically reflect on their own and their peers' learning progress and performance. Although self-assessment and peer assessment of written language performance have been widely explored, assessment of spoken language, especially presentation skills, is under-explored. Additionally, students' peer assessments have been found to differ from teachers' assessments (De Grez, Valcke, & Roozen, 2012), possibly due to a lack of training. This study aimed to investigate whether in-service teacher participants, with experience in marking students' performance, would be able to undertake self-assessment and peer assessment effectively in comparison to the teacher's assessment. The study also intended to explore participants' perceptions of self-assessment and peer assessment of English presentation skills. The participants were 14 in-service teachers teaching their native language at levels ranging from primary to tertiary, who were also studying English as a foreign language. The research instruments were scoring rubrics and an online questionnaire. The data were analysed with Pearson's correlation coefficients, means, and standard deviations. The results revealed that the in-service teachers performed better in peer assessment. The study's discussion provides fruitful implications for language assessment.


Mathematics · 2021 · Vol 9 (22) · pp. 2933
Author(s): Dong-Joong Kim, Sang-Ho Choi, Younhee Lee, Woong Lim

The purpose of this study is to investigate secondary teacher candidates’ experience of mathematical modeling task design. In the study, 54 teacher candidates in a university-based teacher education program created modeling tasks and scoring rubrics. Next, the participants pilot-tested the tasks with students and had the opportunity to revise the original tasks and rubrics based on student responses. The data included participants’ statements, in which they described and reflected on the design and revision process of modeling tasks. The study describes six didactic revision strategies in revising modeling tasks and identifies five emerging pedagogical ideas from revising tasks and rubrics. The study also discusses the way modeling task design activities have the potential to support teacher candidates’ learning through a bottom-up modeling curriculum in teacher education.


2021 · Vol 9 (1) · pp. 77
Author(s): Abeer Al-Ghazo, Issam Ta'amneh

The paper investigates which writing scoring rubrics teachers prefer when assessing students' writing assignments, and which dimensions teachers of English as a foreign language (EFL) emphasize when scoring EFL writing summaries. Thirty male and female Jordanian EFL teachers who teach English in both basic and secondary schools participated in the study. To conduct it, the researchers prepared and distributed a questionnaire consisting of twenty-seven items suited to the purpose of the study. To analyze the participants' responses to the questionnaire, the researchers calculated percentages, means, and standard deviations. The results revealed a high interest in using analytic scoring rubrics to correct students' writing: the total mean reached 3.27, with a standard deviation of 0.65, indicating a high degree of agreement. Moreover, the results also highlight the importance of using scoring rubrics as precise and effective methods to assess learners' writing performance.
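The descriptive statistics named in the abstract (item means and standard deviations on a Likert-scale questionnaire, with the mean mapped onto an agreement band) can be sketched as follows. The response values and the band cut-offs are illustrative assumptions, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses to one questionnaire item
# (the actual survey data are not published in the abstract).
responses = [4, 3, 4, 5, 3, 2, 4, 3, 3, 4]

item_mean = mean(responses)   # overall agreement level for the item
item_sd = stdev(responses)    # spread of the responses

# Map the mean onto agreement bands, a common convention for
# 5-point scales (cut-offs here are assumed, not from the paper).
def agreement_degree(m: float) -> str:
    if m >= 3.68:
        return "high"
    if m >= 2.34:
        return "moderate"
    return "low"

print(f"mean={item_mean:.2f}, sd={item_sd:.2f}, "
      f"degree={agreement_degree(item_mean)}")
```

On this reading, a reported total mean of 3.27 with SD 0.65 summarizes the same computation aggregated over all twenty-seven items.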


2021
Author(s): Matt Sievers, Connor Reemts, Katie Dickinson, Joya Mukerji, Ismael Barreras Beltran, ...

Evolution by natural selection is recognized as both the most important concept in undergraduate biology and the most difficult to teach. Unfortunately, teaching and assessment of evolution have been impaired by legacy approaches that focus on Darwin's original insights and the Modern Synthesis' integration of Mendelian genetics, but ignore or downplay advances from what we term the Molecular Synthesis. To create better alignment between instructional approaches and contemporary research in the biosciences, we propose that the primary learning goal in teaching evolution should be for students to connect genotypes, phenotypes, and fitness. To support this approach, we developed and tested assessment questions and scoring rubrics called the Extended Assessing Conceptual Reasoning of Natural Selection (E-ACORNS) instrument. Initial E-ACORNS data suggest that after traditional instruction, few students recognize the molecular synthesis, prompting us to propose that introductory course sequences be re-organized with the molecular synthesis as their central theme.


Author(s): Jaime Jordan, Laura R. Hopson, Caroline Molins, Suzanne K. Bentley, Nicole M. Deiorio, ...

2021 · Vol 1933 (1) · pp. 012081
Author(s): Lussy Dwiutami Wahyuni, Gumgum Gumela, Herdiyan Maulana

2021 · Vol 29 (2)
Author(s): Wee Sian Wong, Chih How Bong

Automated Essay Scoring (AES) refers to Artificial Intelligence (AI) applications that assess and score essays. Several well-known commercial AES systems have been adopted in Western countries, and many research works have investigated automated essay scoring. However, most of these products and studies are not related to the Malaysian English test context. AES products tend to score essays against the rubrics of a particular English test (e.g., TOEFL, GMAT) using proprietary scoring algorithms that are not accessible to users. In Malaysia, the research and development of AES is scarce. This paper formulates a Malaysia-based AES, namely Intelligent Essay Grader (IEG), for the Malaysian English test environment, using our collection of two Malaysian University English Test (MUET) essay datasets. We proposed an essay scoring rubric based on language and semantic features. We analyzed the correlation of the proposed language and semantic features with the essay grade using the Pearson Correlation Coefficient. Furthermore, we constructed an essay scoring model to predict the essay grades. We found that language features such as vocabulary count and advanced part-of-speech usage were highly correlated with the essay grades, and that the language features showed a greater influence on essay grades than the semantic features. From our prediction model, we observed that the model yielded better accuracy with the selected highly correlated essay features, followed by the language features.

