peer grading
Recently Published Documents


TOTAL DOCUMENTS: 59 (FIVE YEARS: 15)

H-INDEX: 10 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Onesun Steve Yoo ◽  
Dongyuan Zhan

A critical issue in operating massive open online courses (MOOCs) is the scalability of providing feedback. Because it is not feasible for instructors to grade a large number of students' assignments, MOOCs rely on peer grading systems. Yoo and Zhan investigate the efficacy of this practice when student graders are treated as rational economic agents. Using an economic model that characterizes the behavior of student graders, they analyze the accuracy of the current peer grading scheme. Interestingly, they identify a systematic grading bias toward the mean, which discourages students from learning. To improve current practice, they propose a simple scale-shift grading scheme that can simultaneously improve grading accuracy and correct grading bias. They discuss how it can be readily implemented in practice with moderate involvement from instructors and MOOC platforms.
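The abstract does not specify the exact form of the scale-shift scheme. The following is a minimal sketch of one plausible reading, in which raw peer grades are linearly rescaled so their mean and spread match those of a small instructor-graded calibration sample; the function name, calibration procedure, and sample values are illustrative assumptions, not the authors' mechanism.

```python
import numpy as np

def scale_shift(peer_grades, calib_peer, calib_instructor):
    """Linearly rescale peer grades so that, on a calibration subset graded by
    both peers and the instructor, the rescaled peer grades match the
    instructor's mean and standard deviation (one plausible reading of a
    "scale-shift" correction for bias toward the mean)."""
    peer_grades = np.asarray(peer_grades, dtype=float)
    calib_peer = np.asarray(calib_peer, dtype=float)
    calib_instructor = np.asarray(calib_instructor, dtype=float)

    # Scale: expand the compressed peer-grade spread to the instructor's spread.
    scale = calib_instructor.std() / calib_peer.std()
    # Shift: align the means after rescaling.
    shift = calib_instructor.mean() - scale * calib_peer.mean()
    return scale * peer_grades + shift

# Example: peer grades clustered near the mean, instructor spot-checks more spread out.
peer = [62, 65, 68, 70, 71, 73]   # hypothetical raw peer grades
calib_p = [64, 68, 72]            # peer grades on the calibration subset
calib_i = [55, 70, 85]            # instructor grades on the same subset
print(scale_shift(peer, calib_p, calib_i))
```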


2020 ◽  
Vol 8 (3) ◽  
pp. 1-37
Author(s):  
Ioannis Caragiannis ◽  
George A. Krimpas ◽  
Alexandros A. Voudouris

Author(s):  
Fedor Duzhin ◽  
Amrita Sridhar Narayanan

In an undergraduate programming class taught at Nanyang Technological University, Singapore, students (N=243) were given an opportunity to grade reports submitted by their peers. 10% of all students participated in peer grading and were satisfied with the grade given to them by peers (i.e., this group did not use instructors' resources). 13% participated in peer grading, updated their reports based on peer feedback, and submitted them to a course tutor for final grading. We show that although students who participated in peer grading and updated their reports achieved higher scores, this was because they were stronger students to begin with. At the same time, the scores of students who participated in peer grading and did not re-submit their reports to an instructor were not lower than the average. Thus, peer grading can be recommended in programming classes as a strategy that reduces instructors' workload without jeopardizing students' learning.
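The claim that the re-submitting group's higher scores are explained by prior ability is a confound-control argument. The abstract does not state the analysis used; the sketch below shows one standard way to make such a comparison, regressing final scores on group membership while adjusting for a baseline-ability covariate. The data frame, column names, and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: one row per student, with a baseline-ability measure
# (e.g., a prior test score), the peer-grading group, and the final report score.
df = pd.DataFrame({
    "baseline": [55, 60, 62, 70, 72, 75, 58, 66, 80, 68],
    "group":    ["none", "none", "peer_only", "peer_only", "resubmit",
                 "resubmit", "none", "peer_only", "resubmit", "none"],
    "final":    [58, 61, 64, 73, 78, 80, 57, 68, 85, 67],
})

# OLS with the baseline as a covariate: if the coefficient on the "resubmit"
# group shrinks toward zero once baseline ability is included, the raw score
# advantage is attributable to stronger students self-selecting into that group.
model = smf.ols("final ~ baseline + C(group)", data=df).fit()
print(model.summary())
```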


Author(s):  
Alice Gao ◽  
James Wright ◽  
Kevin Leyton-Brown

In many settings, an effective way of evaluating objects of interest is to collect evaluations from dispersed individuals and aggregate them. Examples include categorizing online content and evaluating student assignments via peer grading. For this data science problem, one challenge is to motivate participants to conduct such evaluations carefully and to report them honestly, particularly when doing so is costly. Existing approaches, notably peer-prediction mechanisms, can incentivize truth telling in equilibrium. However, they also give rise to equilibria in which agents do not pay the costs required to evaluate accurately, and hence fail to elicit useful information. We show that this problem is unavoidable whenever agents are able to coordinate using low-cost signals about the items being evaluated (e.g., text labels or pictures). We then consider ways of circumventing this problem by comparing agents' reports to ground truth, which is available in practice when there exist trusted evaluators, such as teaching assistants in the peer grading scenario, who can perform a limited number of unbiased (but noisy) evaluations. Of course, when such ground truth is available, a simpler approach is also possible: rewarding each agent based on agreement with ground truth with some probability, and unconditionally rewarding the agent otherwise. Surprisingly, we show that the simpler mechanism achieves stronger incentive guarantees, given less access to ground truth, than a large set of peer-prediction mechanisms.
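The "simpler approach" described above is a probabilistic spot-checking rule: with some probability an agent's reward depends on agreement with a trusted evaluation, and otherwise a flat reward is paid unconditionally. Below is a minimal sketch of that idea; the checking probability, reward values, and agreement test are illustrative assumptions, not the paper's exact mechanism.

```python
import random

def spot_check_reward(report, ground_truth, p_check=0.2,
                      reward_match=1.0, reward_mismatch=0.0, reward_flat=1.0):
    """Probabilistic spot-checking reward (illustrative sketch).

    With probability p_check the agent's report is compared against a trusted
    (possibly noisy) ground-truth evaluation and rewarded only on agreement;
    otherwise a flat reward is paid unconditionally. Because the reward never
    depends on other agents' reports, low-effort coordination on cheap signals
    gains nothing when the spot check fires."""
    if random.random() < p_check:
        return reward_match if report == ground_truth else reward_mismatch
    return reward_flat

# Example: grading a submission on a coarse label scale.
random.seed(0)
print(spot_check_reward(report="pass", ground_truth="pass"))
print(spot_check_reward(report="fail", ground_truth="pass"))
```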

