Inter-Rater Reliability: Chance-corrected Measures
Assessing inter-rater reliability requires correcting the observed agreement for the agreement expected by chance. Several measures exist for this purpose. We consider some measures for nominal variables, investigate their mathematical characteristics on the basis of two mathematical models, and check whether they are genuinely chance-corrected. Furthermore, we introduce qualitative criteria for such measures and evaluate the measures against them. Finally, we give guidelines for interpreting the values of a measure and consider chance agreement as random measurement error.
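As a concrete illustration of the chance-correction idea, here is a minimal sketch of Cohen's kappa, one common chance-corrected measure for two raters and nominal categories (the specific measures analyzed in the paper may differ): observed agreement p_o is reduced by chance agreement p_e and rescaled via kappa = (p_o - p_e) / (1 - p_e).

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement of two raters on nominal items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: expected agreement of two independent raters
    # who label according to their own marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    # Rescale so that 0 means chance-level and 1 means perfect agreement.
    return (p_o - p_e) / (1 - p_e)
```

For example, with ratings `["x","x","y","y"]` and `["x","x","y","x"]`, observed agreement is 0.75, chance agreement is 0.5, and kappa is 0.5.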
Keywords: agreement for nominal categories, chance-corrected measure, inter-rater agreement, inter-rater reliability, kappa statistic