Students’ experiences of fairness in summative assessment: A study in a higher education context

2022 · Vol 72 · Article 101118
Author(s): Ali Darabi Bazvand, Amirhossein Rasooli
2019 · Vol 16 (4) · pp. 381-391
Author(s): Stavros A. Nikou, Anastasios A. Economides

Purpose: The purpose of this study is to compare the overall usability and user experience of desktop computers and mobile devices when used in a summative assessment in the context of a higher education course.
Design/methodology/approach: The study follows a between-groups design. The participants were 110 first-year undergraduate students from a European university. Students in the experimental group took the assessment on mobile devices, whereas students in the control group took it on desktop computers. After the assessment, students self-reported their experiences with mobile-based assessment (MBA) and computer-based assessment (CBA), respectively. The instruments used were the User Experience Questionnaire and the System Usability Scale.
Findings: Attractiveness and novelty were rated significantly higher in the experimental group (MBA), while no significant differences were found between the two groups in terms of efficiency, perspicuity, dependability and stimulation. The overall system usability score did not differ between the two conditions.
Practical implications: The usability and user experience issues discussed in this study can inform educators and policymakers about the potential of mobile devices in online assessment practices as an alternative to desktop computers.
Originality/value: The study is novel in that it provides quantitative evidence for the usability and user experience of both desktop computers and mobile devices when used in a summative assessment in a higher education course. The findings can contribute towards the interchangeable use of desktop computers and mobile devices in higher education assessment practices.
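The abstract does not report how the System Usability Scale (SUS) scores were computed or compared. As a point of reference only, the sketch below applies Brooke's standard SUS scoring formula and an independent-samples t-test; the group sizes and response data are illustrative assumptions, not the study's data, and the paper's actual statistical test may differ.

```python
import numpy as np
from scipy import stats

def sus_score(responses):
    """Brooke's standard SUS scoring: ten 5-point Likert items (1-5);
    odd items contribute (response - 1), even items (5 - response),
    and the summed contributions are scaled by 2.5 to give 0-100."""
    r = np.asarray(responses)
    odd = r[0::2] - 1        # items 1, 3, 5, 7, 9 (positively worded)
    even = 5 - r[1::2]       # items 2, 4, 6, 8, 10 (negatively worded)
    return (odd.sum() + even.sum()) * 2.5

# Hypothetical responses for the two conditions (sizes are assumptions).
rng = np.random.default_rng(42)
mba = [sus_score(rng.integers(1, 6, size=10)) for _ in range(55)]  # mobile group
cba = [sus_score(rng.integers(1, 6, size=10)) for _ in range(55)]  # desktop group

# A simple between-groups comparison of the overall SUS scores.
t, p = stats.ttest_ind(mba, cba)
print(f"MBA mean = {np.mean(mba):.1f}, CBA mean = {np.mean(cba):.1f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```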


Author(s): Mark S. Davies, Maddalena Taras

Assessment literacies are gaining traction, but the links between theory, practice and perceived understandings in higher education (HE) remain little explored. This article builds on and consolidates ten years of research evaluating assessment literacies among HE lecturers in education and science, and among staff developers, by presenting a comparative view of the data. The results indicate a generally good understanding of the theoretical and practical aspects of summative assessment across all groups. However, understandings of formative assessment showed little concordance between and within the groups, particularly among staff developers, although this group was better at clarifying the necessary link between formative assessment and feedback. Although education lecturers had a firmer grasp of central terminologies, deficits remain in understanding how these terms interrelate. Staff developers' relative weakness of understanding in some areas is of concern, since this group shapes those who teach. These issues are exacerbated by a lack of acknowledgement that they exist, which may seriously hamper the development of both staff and students in clarifying processes they encounter daily. Basic shared understandings are required that can translate into personal, coherent assessment literacies. As a community we need to take on this task; if we do not, we will, as individuals and as individual groups, continue to have fragmented assessment literacies.


2014 · Vol 13 (1) · pp. 83-89
Author(s): Emily A. Holt, Britt Fagerheim, Susan Durham

Online plagiarism tutorials are increasingly popular in higher education, as faculty and staff try to curb the plagiarism epidemic. Yet no research has validated the efficacy of such tools in minimizing plagiarism in the sciences. Our study compared three plagiarism-avoidance training regimens (i.e., no training, online tutorial, or homework assignment) and their impacts on students' ability to accurately discriminate plagiarism from text that is properly quoted, paraphrased, and attributed. Using pre- and post-surveys of 173 undergraduate students in three general ecology courses, we found that students given the homework assignment had far greater success in identifying plagiarism, or the lack thereof, than students given no training. In general, students trained with the homework assignment also identified plagiarism more successfully than students trained with the online tutorial. We further found that the summative assessment associated with the plagiarism-avoidance training formats (i.e., homework grade and online tutorial assessment score) did not correlate with student improvement on the surveys over time.


Author(s): Joyce W. Gikandi

As online and blended learning become increasingly common, higher education practitioners and researchers need to rethink fundamental issues of teaching, learning, and assessment in these non-traditional spaces. Fundamental issues of assessment, and reliability in particular, have not been well understood despite the proliferation of e-learning in higher education. The chapter begins with a justification of the need to reconceptualize assessment and its associated fundamental issues in e-learning settings. The author then articulates the distinction between reliability in the context of assessment for learning (formative assessment) and assessment of learning (summative assessment). The core characteristics of reliability are critically examined and exemplified using research insights, showing how achieving these characteristics enhances reliability and, by implication, the validity of assessment. The identified characteristics include explicit learning goals and shared meaning; documentation and monitoring of evidence of learning; and multiple sources of evidence of learning and multi-dimensional perspectives. Finally, conclusions and recommendations are offered.


Author(s): Maria Assumpció Rafart Serra, Andrea Bikfalvi, Josep Soler Masó, Jordi Poch Garcia

A myriad of higher education degrees, including science, engineering, business and economics, could integrate spreadsheet-based activities into the teaching-learning process. In this article, we introduce and describe a new tool that generates, individually assigns, corrects, provides feedback on and grades spreadsheet exercises and activities, all completely automatically. The system's versatility means it can be applied in a range of areas of knowledge and subjects, including mathematics, statistics, physics, accounting and economics, among others. It can be used for both formative and summative assessment based on students' answers, providing them with the corresponding feedback and/or the resulting grades. The proposed solution has been developed and implemented at the University of Girona (Spain) and forms part of a wider family of tools that aim to contribute to the continuous improvement of higher education.
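The article does not publish the tool's internals. Purely as an illustration of the general technique, the sketch below shows one minimal way automatic correction of a spreadsheet exercise could work, reading a submitted .xlsx workbook with openpyxl and comparing graded cells against an answer key; all cell addresses, expected values, tolerances, filenames and feedback messages are hypothetical, not the University of Girona tool's actual design.

```python
from openpyxl import load_workbook

# Hypothetical answer key: cell -> (expected value, tolerance, hint on error).
ANSWER_KEY = {
    "B5": (1250.0, 0.01, "Check the SUM formula over B2:B4."),
    "C7": (0.15, 0.001, "The growth rate should divide by the base-year value."),
}

def grade_submission(path):
    """Open a submitted workbook (data_only=True reads the cached formula
    results saved with the file, not the formulas themselves), compare each
    graded cell to the key, and collect per-cell feedback."""
    ws = load_workbook(path, data_only=True).active
    feedback, correct = [], 0
    for cell, (expected, tol, hint) in ANSWER_KEY.items():
        value = ws[cell].value
        if isinstance(value, (int, float)) and abs(value - expected) <= tol:
            correct += 1
        else:
            feedback.append(f"{cell}: got {value!r} -- {hint}")
    grade = 10.0 * correct / len(ANSWER_KEY)   # grade on a 0-10 scale
    return grade, feedback

grade, notes = grade_submission("student_042.xlsx")  # hypothetical filename
print(f"Grade: {grade:.1f}")
print("\n".join(notes))
```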


Author(s): Alexander J. Aidan

This chapter focuses on the second iteration of a longitudinal action research project that culminates in students acting as partners in the assessment process by co-creating their summative assessment marking criteria. The research argues that co-ownership of marking criteria increases assessment literacy and helps students understand how their assignments will be marked. The findings are analyzed through a social-constructivist and critical pedagogy lens. Research limitations are presented, and future research pathways are defined. The research concludes that actively engaging students in their assessment leads to an enhanced learning experience and creates space for more democratic assessment in the classroom.

