Intercoder Reliability: Recently Published Documents

Total documents: 73 (last five years: 21)
H-index: 19 (last five years: 2)

Author(s): Gabriela Tarouco, Rafael Madeira, Soraia Vieira

Abstract: In this paper we compare the recent dataset of Latin American party manifestos coded by the Manifesto Project database with other estimates of position on the left-right scale and with an alternative coding of the same documents, discussing their limits and potential. The differences between the results offer an interesting opportunity to discuss the method, its reliability, and the validity of the coding scheme and the scales. Our findings suggest that the fragile reliability of hand-coded content analysis could be addressed by employing intercoder reliability tests, and that users must be cautious when basing conclusions on this project's results.


2021, pp. bmjqs-2021-013672
Author(s): Sigall K Bell, Fabienne Bourgeois, Catherine M DesRoches, Joe Dong, Kendall Harcourt, ...

Background: Patients and families are important contributors to the diagnostic team, but their perspectives are not reflected in current diagnostic measures. Patients/families can identify some breakdowns in the diagnostic process beyond the clinician's view. We aimed to develop a framework with patients/families to help organisations identify and categorise patient-reported diagnostic process-related breakdowns (PRDBs) to inform organisational learning.

Method: A multi-stakeholder advisory group including patients, families, clinicians, and experts in diagnostic error, patient engagement and safety, and user-centred design co-developed a framework for PRDBs in ambulatory care. We tested the framework using standard qualitative analysis methods with two physicians and one patient coder, analysing 2165 patient-reported ambulatory errors in two large surveys representing 25,425 US respondents. We tested intercoder reliability of breakdown categorisation using Gwet's AC1 and Cohen's kappa statistics, treating agreement coefficients of 0.61–0.80 as good agreement and 0.81–1.00 as excellent agreement.

Results: The framework describes 7 patient-reported breakdown categories (with 40 subcategories), 19 patient-identified contributing factors and 11 potential patient-reported impacts. Patients identified breakdowns in each step of the diagnostic process, including missing or inaccurate main concerns and symptoms; missing or outdated test results; and communication breakdowns such as not feeling heard or misalignment between patient and provider about symptoms, events, or their significance. The frequency of PRDBs was 6.4% in one dataset and 6.9% in the other. Intercoder reliability was good to excellent in each dataset: AC1 0.89 (95% CI 0.89 to 0.90) to 0.96 (95% CI 0.95 to 0.97); kappa 0.64 (95% CI 0.62 to 0.66) to 0.85 (95% CI 0.83 to 0.88).

Conclusions: The PRDB framework, developed in partnership with patients/families, can help organisations identify and reliably categorise PRDBs, including some that are invisible to clinicians; guide interventions to engage patients and families as diagnostic partners; and inform whole-organisation learning.
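The two coefficients named above can be computed directly from the two coders' category assignments. The Python sketch below is a minimal illustration (not the study's own code) of Cohen's kappa and Gwet's AC1 for two coders assigning one nominal category per item; the breakdown labels and counts in the example are hypothetical.

```python
# Hedged sketch: Cohen's kappa and Gwet's AC1 for two coders, nominal categories.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    n = len(coder_a)
    cats = set(coder_a) | set(coder_b)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ma, mb = Counter(coder_a), Counter(coder_b)
    p_e = sum((ma[c] / n) * (mb[c] / n) for c in cats)   # chance agreement from marginals
    return (p_o - p_e) / (1 - p_e)

def gwet_ac1(coder_a, coder_b):
    n = len(coder_a)
    cats = set(coder_a) | set(coder_b)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ma, mb = Counter(coder_a), Counter(coder_b)
    # pi_c: average marginal proportion of category c across both coders
    pi = {c: (ma[c] / n + mb[c] / n) / 2 for c in cats}
    p_e = sum(p * (1 - p) for p in pi.values()) / (len(cats) - 1)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical breakdown-category assignments for 100 patient reports
a = ["no_breakdown"] * 90 + ["communication"] * 7 + ["test_results"] * 3
b = ["no_breakdown"] * 88 + ["communication"] * 9 + ["test_results"] * 3
print(f"kappa = {cohen_kappa(a, b):.3f}, AC1 = {gwet_ac1(a, b):.3f}")
```

Note the design difference the paper's reported values reflect: kappa's chance term grows with skewed marginals, while AC1 is less sensitive to prevalence, so AC1 typically comes out higher on the same data.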


2021, Vol 99 (Supplement_3), pp. 147-148
Author(s): Elizabeth M Brownawell, Elizabeth A Hines, Linda Falcone, Chris Gambino

Abstract: The group mental models of swine-related biosecurity held by producers and experts were assessed and compared using network analysis. The proper implementation of biosecurity plans reduces the risk of biological hazards that could cripple the industry. Recently collected survey data show that producer motivation to adopt a biosecurity protocol is not driven solely by the value of the operation (Hines and Falcone, unpublished); other factors shape how producers perceive risk relating to biosecurity management. To identify how pig producers and experts conceptualize biosecurity, open-ended survey questions were asked. Survey responses (n = 123) were coded using a newly developed codebook, and intercoder reliability was established using Krippendorff's α. Code co-occurrence was used to build a network diagram showing producer and expert mental models, i.e., depictions of the interdependent relationships among values, beliefs, behavior, and the cognitive processes of decision making. Analyses of code co-occurrence revealed differences between producers and experts. The results suggest PA-based producers think of biosecurity in terms of protecting their property (i.e., inward protection), which was closely associated with limiting the access of "outsiders." The mental model diagram also suggests producers think about biosecurity more broadly, with less clustering of ideas, whereas experts think about biosecurity more narrowly, around two to three themes. Specifically, the expert biosecurity diagram revealed record keeping as an important component of biosecurity, strongly related to how experts think about cleanliness and limiting outsider access. Regarding strategies to address biohazard risks, both producers and experts recognize several options; however, experts showed stronger connections between concepts. The diagrams revealed that experts see all strategies as connected: from an expert perspective, strategies to address biohazard risks should be implemented simultaneously. These findings are a first step toward designing communication to bridge the gaps between expert and producer understanding of biosecurity.
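As a rough illustration of the reliability and co-occurrence steps described above, the sketch below is an assumption-laden reconstruction rather than the authors' code: it computes nominal Krippendorff's α for exactly two coders with complete data and tallies code co-occurrences within responses; the code labels are hypothetical.

```python
# Hedged sketch: two-coder nominal Krippendorff's alpha (no missing data)
# and a simple within-response code co-occurrence count.
from collections import Counter
from itertools import combinations

def krippendorff_alpha_nominal(coder_a, coder_b):
    n_units = len(coder_a)
    n_values = 2 * n_units                                  # total pairable values
    d_observed = sum(a != b for a, b in zip(coder_a, coder_b)) / n_units
    counts = Counter(coder_a) + Counter(coder_b)            # pooled category totals
    d_expected = (n_values ** 2 - sum(v * v for v in counts.values())) / (
        n_values * (n_values - 1)
    )
    return 1 - d_observed / d_expected

# Hypothetical codes assigned to the same five responses by two coders
coder_a = ["cleanliness", "outsider_access", "records", "outsider_access", "cleanliness"]
coder_b = ["cleanliness", "outsider_access", "records", "cleanliness", "cleanliness"]
print(f"alpha = {krippendorff_alpha_nominal(coder_a, coder_b):.2f}")

# Co-occurrence: count pairs of codes applied to the same open-ended response,
# which is the edge weight used to draw a mental-model network diagram
responses = [
    {"outsider_access", "property_protection"},
    {"records", "cleanliness", "outsider_access"},
]
cooccurrence = Counter(pair for codes in responses
                       for pair in combinations(sorted(codes), 2))
print(cooccurrence.most_common(3))
```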


Healthcare, 2021, Vol 9 (9), pp. 1094
Author(s): Jia Luo, Rui Xue, Jinglu Hu, Didier El Baz

Misinformation posted on social media during COVID-19 is a main example of infodemic data. This phenomenon was especially prominent in China at the beginning of the COVID-19 outbreak. While a lot of data can be collected from various social media platforms, publicly available infodemic detection data remains rare and is not easy to construct manually. Therefore, instead of developing techniques for infodemic detection, this paper aims at constructing a Chinese infodemic dataset, "infodemic 2019", by collecting widely spread Chinese infodemic records from the COVID-19 outbreak. Each record is labeled as true, false or questionable. After four rounds of adjustment, the originally imbalanced dataset is converted into a balanced one by exploring the properties of the collected records. The final labels achieve high intercoder reliability with healthcare workers' annotations, and the high-frequency words show a strong relationship between the proposed dataset and pandemic diseases. Finally, numerical experiments are carried out with RNN, CNN and fastText. All of them achieve reasonable performance and provide baselines for future work.
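For the fastText baseline mentioned above, a minimal sketch could look like the following; the file names, hyperparameters, and label scheme are assumptions for illustration, not details reported by the authors. The fasttext Python package expects one labelled, pre-segmented example per line.

```python
# Hedged sketch of a fastText text-classification baseline on an
# infodemic-style dataset; all paths and hyperparameters are hypothetical.
import fasttext

# Training/test files are assumed to contain one example per line in the form
#   __label__true <space-separated tokens of the post>
# with labels true / false / questionable and Chinese text already segmented.
model = fasttext.train_supervised(
    input="infodemic2019_train.txt",   # hypothetical training split
    epoch=25,
    lr=0.5,
    wordNgrams=2,
)

n_samples, precision, recall = model.test("infodemic2019_test.txt")  # hypothetical test split
print(f"n={n_samples}  P@1={precision:.3f}  R@1={recall:.3f}")

labels, probs = model.predict("placeholder segmented post text", k=1)
print(labels[0], round(float(probs[0]), 3))
```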


Author(s): Mirko Schürmann, Anja Panse, Zain Shaikh, Rolf Biehler, Niclas Schaper, ...

Abstract: Mathematics Learning Support Centres are becoming increasingly common in higher education, both internationally and in Germany. While it is clear that their quality largely depends on well-functioning interaction in consultations, little is known about how such consultations proceed in detail. On the basis of models from the literature and recorded support sessions (N = 36), we constructed a process model that divides consultations into four ideal-typical phases. In individual consultations, forward or backward leaps occur, but overall the model seems to describe the data well. High intercoder reliability shows that it can be applied consistently to real data by different researchers. An analysis of the consultations between students and tutors shows that both mainly work on the students' previous attempts or ideas for solving the exercise and on concrete strategies for solving a problem within the session. In contrast, very little time is dedicated to summarizing and reflecting on the solution. The data allows for a more in-depth discussion of what constitutes quality in advising processes and how it might be further explored. Practically, the model may structure support sessions and help in focussing on different goals in different phases.


Author(s): Franziska Oehmer

The variable provides information on whether the nationality of the (alleged) victim and/or perpetrator is mentioned in connection with crimes and offences. Research shows that minorities are disproportionately more often depicted as perpetrators than as victims (Hestermann, 2010; Vinson & Ertter, 2002).

Field of application/theoretical foundation: The variable "nationality of the (alleged) victim or perpetrator" is of particular relevance in the context of debates on media ethics and legal philosophy. It is mainly used in media effects research (stereotype and cultivation research, see Arendt, 2010).

Example study: Hestermann (2010)

Info about variable
Variable name/definition: nationality [Nationalität]
Level of analysis: mentioned (alleged) victim and perpetrator in the report
Values (nationality of the victim and perpetrator): not mentioned [nicht genannt]; German [deutsch]; foreign [ausländisch]; explicitly unknown [ausdrücklich unbekannt]; not applicable [trifft nicht zu]
Intercoder reliability: nationality of the victim 0.94; nationality of the perpetrator 0.98 (2 coders). Which exact coefficient was calculated is not reported.
Codebook: available at https://www.jstor.org/stable/j.ctv941tf9.12

References
Arendt, F. (2010). Cultivation effects of a newspaper on reality estimates, explicit and implicit attitudes. Journal of Media Psychology, 22, 147–159.
Hestermann, T. (2010). Fernsehgewalt und die Einschaltquote: Welches Publikumsbild Fernsehschaffende leitet, wenn sie über Gewaltkriminalität berichten [Television violence and ratings: Which picture of the audience guides television makers when they report on violent crime]. Baden-Baden: Nomos.
Vinson, C. D., & Ertter, J. S. (2002). Entertainment or education: How do media cover the courts? Harvard International Journal of Press/Politics, 7(4), 80–97.
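Since the entry does not state which coefficient produced the 0.94 and 0.98 values, the sketch below shows two common candidates for two coders on a nominal variable like this one: simple percent agreement (Holsti) and Cohen's kappa. The example codes are hypothetical and use the translated codebook values.

```python
# Hedged sketch: percent agreement and Cohen's kappa for two coders on the
# nationality variable; the coded reports below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

VALUES = ["not mentioned", "German", "foreign", "explicitly unknown", "not applicable"]

# Hypothetical codes from two coders for the victim's nationality in five reports
coder1 = ["not mentioned", "German", "foreign", "not mentioned", "German"]
coder2 = ["not mentioned", "German", "foreign", "German", "German"]

holsti = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)  # raw agreement
kappa = cohen_kappa_score(coder1, coder2, labels=VALUES)            # chance-corrected
print(f"percent agreement = {holsti:.2f}, Cohen's kappa = {kappa:.2f}")
```

With a dominant "not mentioned" category, raw agreement near 0.94–0.98 can correspond to a noticeably lower chance-corrected coefficient, which is why reporting the coefficient used matters.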


Author(s): Carina Nina Vorisek, Sophie Anne Ines Klopfenstein, Julian Sass, Moritz Lehne, Carsten Oliver Schmidt, ...

Studies investigating the suitability of SNOMED CT for COVID-19 datasets are still scarce. The purpose of this study was to evaluate the suitability of SNOMED CT for structured searches of COVID-19 studies, using the German Corona Consensus Dataset (GECCO) as an example. Suitability of the international standard SNOMED CT was measured with the ISO/TS 21564 scoring system, and the intercoder reliability of two independent mapping specialists was evaluated. The resulting analysis showed that the majority of data items had either a complete or partial equivalent in SNOMED CT (complete equivalent: 141 items; partial equivalent: 63 items; no equivalent: 1 item). Intercoder reliability was moderate, possibly due to the absence of predefined mapping rules and the high percentage (74%) of different but similar concepts among the 86 items for which the specialists chose different concepts. The study shows that SNOMED CT can be utilized for COVID-19 cohort browsing. However, further studies investigating mapping rules and additional international terminologies are necessary.
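As an illustration of how two mapping specialists' results could be compared, the sketch below uses hypothetical data (not the study's): it computes the share of items for which both chose the same SNOMED CT concept and Cohen's kappa over ISO/TS 21564-style degree-of-equivalence ratings.

```python
# Hedged sketch: agreement between two mapping specialists on (a) the chosen
# SNOMED CT concept and (b) the equivalence rating assigned per data item.
from sklearn.metrics import cohen_kappa_score

# Hypothetical mappings: (SNOMED CT concept id, equivalence rating)
mapper_1 = [("840539006", "complete"), ("386661006", "complete"), ("267036007", "partial")]
mapper_2 = [("840539006", "complete"), ("386661006", "partial"), ("267036007", "partial")]

same_concept = sum(a[0] == b[0] for a, b in zip(mapper_1, mapper_2)) / len(mapper_1)
kappa = cohen_kappa_score([a[1] for a in mapper_1], [b[1] for b in mapper_2])
print(f"same concept chosen: {same_concept:.0%}, kappa on equivalence ratings: {kappa:.2f}")
```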


2021, pp. 004912412098618
Author(s): Victoria Reyes, Elizabeth Bogumil, Levin Elias Welch

Transparency is once again a central issue of debate across types of qualitative research. Work on how to conduct qualitative data analysis, on the other hand, walks us through the step-by-step process of coding and understanding the data we've collected. Although there are a few exceptions, there is less focus on transparency about decision-making processes in the course of research. In this article, we argue that scholars should create a living codebook, a set of tools that documents the data analysis process. It has four parts: (1) a processual database that keeps track of initial codes and a final database for completed codes, (2) a "definitions and key terms" list for conversations about codes, (3) memo-writing, and (4) a difference list explaining the rationale behind unmatched codes. It allows researchers to interrogate taken-for-granted assumptions about which data are focused on, why, and how to analyze them. To that end, the living codebook moves beyond discussions of intercoder reliability to how analytic codes are created, refined, and debated.
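One possible way to concretise the four parts of such a living codebook is as plain data structures; the sketch below is an illustrative assumption, not tooling described by the authors.

```python
# Hedged sketch: the four parts of a "living codebook" as simple dataclasses.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CodeEntry:
    code: str
    definition: str
    example_excerpt: str = ""
    status: str = "initial"       # "initial" in the processual database, "final" once agreed

@dataclass
class Memo:
    author: str
    written: date
    text: str

@dataclass
class Difference:
    excerpt_id: str
    coder_a_code: str
    coder_b_code: str
    rationale: str                # why the codes differ and how the team resolved it

@dataclass
class LivingCodebook:
    processual_codes: list[CodeEntry] = field(default_factory=list)  # (1) evolving codes
    final_codes: list[CodeEntry] = field(default_factory=list)       # (1) completed codes
    key_terms: dict[str, str] = field(default_factory=dict)          # (2) definitions and key terms
    memos: list[Memo] = field(default_factory=list)                  # (3) memo-writing
    differences: list[Difference] = field(default_factory=list)      # (4) unmatched-code rationales
```

Keeping these parts as versioned files (for example, CSVs tracked in git) would make the decision trail the authors call for auditable over the life of a project.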


2021, Vol 20, pp. 160940692110024
Author(s): Manoj Malviya, Natascha T. Buswell, Catherine G. P. Berdanier

While calculating intercoder reliability (ICR) is straightforward for text-based data, such as interview transcript excerpts, determining ICR for naturalistic observational video data is much more complex. To date, few methods proposed in the literature are robust enough to handle complexities such as simultaneous events and partial agreement between raters. This is especially important with the emergence of high-resolution video data, which captures continuous or nearly continuous observational data in naturalistic settings. In this paper, we present three approaches to calculating ICR. First, we present the technical approach to cleaning and comparing two coders' results such that traditional metrics of ICR (e.g., Cohen's κ, Krippendorff's α, Scott's Π) can be calculated, a method previously unarticulated in the literature. However, these calculations are intensive, requiring significant data manipulation. As an alternative, this paper also proposes two novel methods to calculate ICR by algorithmically comparing visual representations of each coder's results. To demonstrate the efficacy of the approaches, we apply all three methods to observational data from two separate ongoing research contexts. We find that the visual methods perform as well as the traditional measures of ICR and offer a significant reduction in the work required to calculate ICR, with the added advantage of allowing the researcher to set thresholds for acceptable agreement in lag time. These methods may transform the consideration of ICR in other studies across disciplines that employ observational data.
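A rough sketch of the first, "traditional" route described above (assumptions on my part, not the authors' algorithm): resample each coder's timestamped event segments onto a common time grid so that a standard coefficient such as Cohen's κ can be computed on the aligned codes.

```python
# Hedged sketch: align two coders' event timelines on fixed time bins and
# compute Cohen's kappa on the binned codes; data below is hypothetical.
from collections import Counter

def to_grid(segments, duration, step=1.0, fill="none"):
    """segments: list of (start_s, end_s, code); returns one code per time bin."""
    n_bins = int(duration / step)
    grid = [fill] * n_bins
    for start, end, code in segments:
        for i in range(int(start / step), min(int(end / step), n_bins)):
            grid[i] = code
    return grid

def cohen_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 60 s of observational video coded by two raters
coder1 = [(0, 12, "sketching"), (12, 40, "measuring"), (40, 60, "assembling")]
coder2 = [(0, 10, "sketching"), (10, 42, "measuring"), (42, 60, "assembling")]
kappa = cohen_kappa(to_grid(coder1, 60), to_grid(coder2, 60))
print(f"kappa on 1-second bins: {kappa:.2f}")
```

Widening the bin size (the step parameter) is one crude way to tolerate small lags between coders' event boundaries, analogous in spirit to the lag thresholds the visual methods allow.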

