What Determines Inter-Coder Agreement in Manual Annotations? A Meta-Analytic Investigation

2011 · Vol. 37(4) · pp. 699-725
Author(s): Petra Saskia Bayerl, Karsten Ingmar Paul

Recent discussions of annotator agreement have mostly centered on its calculation and interpretation, and on the correct choice of indices. Although these discussions are important, they only consider the "back-end" of the story, namely, what to do once the data are collected. Just as important, in our opinion, is knowing how agreement is reached in the first place and which factors in the annotation process or setting influence coder agreement, as this knowledge can provide concrete guidelines for planning and setting up annotation projects. To investigate whether there are factors that consistently affect annotator agreement, we conducted a meta-analytic investigation of annotation studies reporting agreement percentages. Our meta-analysis synthesized factors reported in 96 annotation studies from three domains (word-sense disambiguation, prosodic transcriptions, and phonetic transcriptions) and was based on a total of 346 agreement indices. Our analysis identified seven factors that influence reported agreement values: annotation domain, number of categories in a coding scheme, number of annotators in a project, whether annotators received training, the intensity of annotator training, the annotation purpose, and the method used for the calculation of percentage agreements. Based on our results, we develop practical recommendations for the assessment, interpretation, calculation, and reporting of coder agreement. We also briefly discuss theoretical implications for the concept of annotation quality.
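
Since the abstract refers to different methods of calculating agreement, the following minimal Python sketch (illustrative only, not taken from the study; all labels are hypothetical) contrasts raw percentage agreement with a chance-corrected index, Cohen's kappa:

    from collections import Counter

    def percentage_agreement(coder_a, coder_b):
        # Raw percentage agreement: share of items labelled identically.
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return matches / len(coder_a)

    def cohens_kappa(coder_a, coder_b):
        # Cohen's kappa: observed agreement corrected for the chance agreement
        # expected from each coder's marginal label distribution.
        n = len(coder_a)
        p_obs = percentage_agreement(coder_a, coder_b)
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        p_exp = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                    for lab in set(coder_a) | set(coder_b))
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical sense labels from two annotators for six items.
    a = ["s1", "s1", "s2", "s1", "s2", "s1"]
    b = ["s1", "s2", "s2", "s1", "s2", "s1"]
    print(percentage_agreement(a, b))  # 0.83
    print(cohens_kappa(a, b))          # 0.67

The same annotations can thus yield quite different reported values depending on whether a raw or chance-corrected index is used, which is one reason the calculation method itself appears as a factor in the meta-analysis.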

Author(s): David Jurgens, Roberto Navigli

Annotated data is a prerequisite for many NLP applications. Acquiring large-scale annotated corpora is a major bottleneck, requiring significant time and resources. Recent work has proposed turning annotation into a game to increase its appeal and lower its cost; however, current games are largely text-based and closely resemble traditional annotation tasks. We propose a new linguistic annotation paradigm that produces annotations from playing graphical video games. The effectiveness of this design is demonstrated using two video games: one that creates a mapping from WordNet senses to images, and a second that performs Word Sense Disambiguation. Both games produce accurate results. The first game yields annotation quality equal to that of experts and a cost reduction of 73% over equivalent crowdsourcing; the second game provides a 16.3% improvement in accuracy over current state-of-the-art sense disambiguation games with WordNet.
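
As an illustration only (the abstract does not state the aggregation scheme), game-with-a-purpose annotations are commonly combined by majority vote over the labels chosen by multiple players; a minimal Python sketch with hypothetical WordNet sense labels:

    from collections import Counter

    def majority_label(labels):
        # Return the most frequent label and the fraction of players who chose it.
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        return label, votes / len(labels)

    # Hypothetical player answers for one occurrence of the word "bank".
    player_labels = ["bank.n.01", "bank.n.01", "bank.n.09", "bank.n.01"]
    print(majority_label(player_labels))  # ('bank.n.01', 0.75)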


Author(s): Manuel Ladron de Guevara, Christopher George, Akshat Gupta, Daragh Byrne, Ramesh Krishnamurti

2017 · Vol. 132 · pp. 47-61
Author(s): Yoan Gutiérrez, Sonia Vázquez, Andrés Montoyo

2005 · Vol. 12(5) · pp. 554-565
Author(s): Martijn J. Schuemie, Jan A. Kors, Barend Mons
