Methods Inf Med 2006; 45(05): 541-547
DOI: 10.1055/s-0038-1634116
Original Article
Schattauer GmbH

Measuring Agreement for Ordered Ratings in 3 x 3 Tables

D. Neveu 1, P. Aubas 2, F. Seguret 2, A. Kramar 3, P. Dujols 1, 2

1 Université Montpellier I, UFR Médecine, Département de l'Information Médicale, Montpellier, France
2 CHU Montpellier, Département de l'Information Médicale, Montpellier, France
3 Centre Régional de Lutte contre le Cancer Val d'Aurelle-Paul Lamarque, Montpellier, France

Publication History

Received: 08 March 2005

Accepted: 18 December 2005

Publication Date:
07 February 2018 (online)

Summary

Objectives: When two raters classify a qualitative variable into three ordered categories, the qualitative agreement is commonly assessed with a symmetrically weighted kappa statistic. However, such statistics can exhibit paradoxes, since they may be insensitive to variations in either complete agreements or disagreements.
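
For illustration, the sketch below computes the standard symmetrically weighted kappa for a 3 x 3 table, with linear (|i - j|) and quadratic ((i - j)^2) disagreement weights; the table counts are invented and NumPy is assumed.

```python
import numpy as np

def weighted_kappa(counts, weights):
    """Cohen's weighted kappa, written with disagreement weights:
    1 minus the ratio of weighted observed to weighted chance disagreement."""
    p = counts / counts.sum()                    # joint proportions
    e = np.outer(p.sum(axis=1), p.sum(axis=0))   # chance-expected proportions
    return 1.0 - (weights * p).sum() / (weights * e).sum()

k = np.arange(3)
dist = np.abs(np.subtract.outer(k, k))  # |i - j| for every cell of the table

# Hypothetical 3 x 3 table: rows are rater 1, columns are rater 2.
counts = np.array([[40.0, 8.0, 2.0],
                   [6.0, 30.0, 9.0],
                   [3.0, 7.0, 45.0]])

print(weighted_kappa(counts, dist))       # linear weights
print(weighted_kappa(counts, dist ** 2))  # quadratic weights
```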

Methods: Agreement may be summarized by the relative amounts of complete agreements and of partial and maximal disagreements beyond chance. Fixing the marginal totals and the trace, we computed symmetrically weighted kappa statistics and developed a new statistic for qualitative agreement. Data sets from the literature were used to illustrate the methods.
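
A small sketch of this decomposition follows, assuming that in a 3 x 3 ordered table "partial" disagreements are the adjacent cells (|i - j| = 1) and "maximal" disagreements are the two corner cells (|i - j| = 2); the function name is hypothetical.

```python
import numpy as np

def beyond_chance_components(counts):
    """Observed-minus-chance mass for complete agreements (|i - j| = 0),
    partial disagreements (|i - j| = 1) and maximal disagreements
    (|i - j| = 2) in a 3 x 3 table. Because observed and chance-expected
    proportions each sum to one, the three components sum to zero."""
    p = counts / counts.sum()
    e = np.outer(p.sum(axis=1), p.sum(axis=0))
    k = np.arange(3)
    d = np.abs(np.subtract.outer(k, k))
    return {name: p[d == v].sum() - e[d == v].sum()
            for name, v in (("complete", 0), ("partial", 1), ("maximal", 2))}
```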

Results: We show that agreement may be better assessed with the unweighted kappa index, κc, together with a new statistic, ζ, which assesses the excess of maximal disagreements with respect to partial ones and does not depend on a particular weighting system. When ζ is equal to zero, maximal and partial disagreements beyond chance are equal. Using its estimated large-sample variance, we compared the values obtained from two contingency tables.
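
The abstract does not give the authors' formula for ζ, so the sketch below is only a plausible reading: κc is the usual unweighted Cohen kappa, and zeta_sketch takes the partial-minus-maximal disagreement excess beyond chance, scaled by the total chance-expected disagreement. The sign is chosen so that larger values mean better agreement, matching the Conclusions; both the sign convention and the normalization are assumptions.

```python
import numpy as np

def kappa_c(counts):
    """Unweighted Cohen kappa: beyond-chance complete agreement,
    rescaled by its maximum possible value."""
    p = counts / counts.sum()
    e = np.outer(p.sum(axis=1), p.sum(axis=0))
    return (np.trace(p) - np.trace(e)) / (1.0 - np.trace(e))

def zeta_sketch(counts):
    """Illustrative zeta (NOT the authors' published definition):
    partial-minus-maximal disagreement excess beyond chance, divided by
    the total chance-expected disagreement. Zero when maximal and partial
    disagreements beyond chance are equal, as stated in the abstract."""
    p = counts / counts.sum()
    e = np.outer(p.sum(axis=1), p.sum(axis=0))
    k = np.arange(3)
    d = np.abs(np.subtract.outer(k, k))
    partial = p[d == 1].sum() - e[d == 1].sum()
    maximal = p[d == 2].sum() - e[d == 2].sum()
    return (partial - maximal) / e[d > 0].sum()  # assumed normalization
```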

Conclusions: The (κc, ζ) pair is sensitive to variations in agreements and/or disagreements and makes it possible to locate the difference between two qualitative agreements. Qualitative agreement improves with increasing values of κc and ζ.