Guidelines for measuring interrater reliability of the Nursing Outcomes Classification
Indicators in the Nursing Outcomes Classification (NOC) need to be tested for validity and reliability. One way to measure the reliability of NOC indicators is interrater reliability. Kappa and percent agreement are statistical methods commonly used together to measure the interrater reliability of an instrument, because both yield values that are easy to interpret. However, two possible conflicts may emerge when the kappa value and the percent agreement are inconsistent with each other. This article aims to provide guidance for researchers who face these two possible conflicts. The guidance refers to interrater reliability measurement with two raters.
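The kind of inconsistency discussed above can be illustrated with a minimal sketch. The code below is not from the article; it is a standard computation of percent agreement and Cohen's kappa for two raters, with hypothetical binary ratings chosen so that percent agreement is high while kappa is low (the well-known prevalence effect).

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Proportion of items on which the two raters give the same rating."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement
    expected from each rater's marginal category frequencies."""
    n = len(rater1)
    p_o = percent_agreement(rater1, rater2)
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: 10 patients scored 0/1 by two raters,
# with one category strongly predominant.
r1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
r2 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]

print(percent_agreement(r1, r2))  # 0.8 — seemingly good agreement
print(cohens_kappa(r1, r2))       # about -0.11 — kappa disagrees
```

Here the raters agree on 80% of patients, yet kappa is near zero (in fact slightly negative), because almost all of the agreement is expected by chance under the skewed marginal distributions. This is one of the asynchronies between the two statistics that the guidance addresses.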