Description

The kappa coefficient (or kappa statistic) is a measure of agreement between 2 observers interpreting the same data. It is the ratio of the observed agreement beyond chance to the maximum possible agreement beyond chance. The number of interpretation categories per observation may be two or more. In this section we calculate the kappa statistic when there are 3 possible interpretations per observation, giving a 3x3 table.


                                    Observer 2
Observer 1      diagnosis 1    diagnosis 2    diagnosis 3    subtotal
diagnosis 1     a              b              c              a + b + c
diagnosis 2     d              e              f              d + e + f
diagnosis 3     g              h              i              g + h + i
subtotal        a + d + g      b + e + h      c + f + i      a + b + c + d + e + f + g + h + i
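To make the formulas below concrete, here is a minimal sketch in Python that stores the table as a nested list and computes the subtotals. The counts and the name table are hypothetical illustrations, not data from the source.

```python
# Hypothetical counts for the 3x3 table above; rows are Observer 1's
# diagnoses, columns are Observer 2's.
table = [
    [40,  5,  5],   # diagnosis 1: a, b, c
    [ 6, 30,  4],   # diagnosis 2: d, e, f
    [ 4,  6, 50],   # diagnosis 3: g, h, i
]

row_totals = [sum(row) for row in table]        # a+b+c, d+e+f, g+h+i
col_totals = [sum(col) for col in zip(*table)]  # a+d+g, b+e+h, c+f+i
grand_total = sum(row_totals)                   # all nine cells

print(row_totals, col_totals, grand_total)      # [50, 40, 60] [50, 41, 59] 150
```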

 

observed agreement as a proportion =

= (a + e + i) / (a + b + c + d + e + f + g + h + i)
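A short sketch of this step, assuming the nested-list representation above; the function name observed_agreement and the example counts are illustrative.

```python
def observed_agreement(table):
    # Proportion of observations on which both observers agree:
    # the diagonal (a + e + i) divided by the grand total.
    total = sum(sum(row) for row in table)
    return sum(table[k][k] for k in range(len(table))) / total

# Hypothetical example counts (same illustrative table as above).
table = [[40, 5, 5], [6, 30, 4], [4, 6, 50]]
print(observed_agreement(table))  # (40 + 30 + 50) / 150 = 0.8
```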

 

expected agreement by chance as a proportion =

= (((a + d + g) * (a + b + c)) + ((b + e + h) * (d + e + f)) + ((c + f + i) * (g + h + i))) / ((a + b + c + d + e + f + g + h + i)^2)
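The same step as a sketch, again assuming the illustrative nested-list table; for each diagnosis the row subtotal is multiplied by the column subtotal, summed, and divided by the squared grand total.

```python
def expected_agreement(table):
    # Chance agreement: sum of (row subtotal * column subtotal)
    # over the diagnoses, divided by total^2.
    total = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    return sum(r * c for r, c in zip(row_totals, col_totals)) / total ** 2

table = [[40, 5, 5], [6, 30, 4], [4, 6, 50]]
print(expected_agreement(table))  # 7680 / 22500, about 0.341
```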

 

kappa by proportion =

= ((observed agreement as a proportion) – (expected agreement by chance as a proportion)) / (1 – (expected agreement by chance as a proportion))
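A minimal sketch of the kappa calculation itself, using the proportions from the two hypothetical steps above.

```python
def kappa(p_observed, p_expected):
    # Agreement beyond chance divided by the maximum possible
    # agreement beyond chance.
    return (p_observed - p_expected) / (1.0 - p_expected)

# With the hypothetical counts above: p_observed = 0.8, p_expected = 7680/22500.
print(kappa(0.8, 7680 / 22500))  # about 0.696
```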

 

standard deviation (standard error of kappa) =

= SQRT (((observed agreement) * (1 – (observed agreement))) / ((total number) * ((1 – (expected agreement by chance))^2)))
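A sketch of this step; the function name kappa_standard_error is an assumption, and n is the total number of observations.

```python
import math

def kappa_standard_error(p_observed, p_expected, n):
    # Approximate standard error of kappa for n total observations.
    return math.sqrt(p_observed * (1.0 - p_observed) / (n * (1.0 - p_expected) ** 2))

# With the hypothetical counts above (n = 150):
print(kappa_standard_error(0.8, 7680 / 22500, 150))  # about 0.0496
```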

 

95% confidence interval =

= (calculated kappa) +/- (1.96 * (standard deviation))
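And a sketch of the interval itself, with the kappa and standard error values computed from the hypothetical counts above.

```python
def kappa_confidence_interval(kappa_value, standard_error, z=1.96):
    # 95% confidence interval: kappa +/- 1.96 * standard error.
    return (kappa_value - z * standard_error, kappa_value + z * standard_error)

print(kappa_confidence_interval(0.6964, 0.0496))  # roughly (0.599, 0.794)
```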

 

Interpretation:

• minimum value of the kappa statistic: less than 0 (possible when agreement is worse than chance)

• maximum value: 1 (perfect agreement)

• The higher the value, the greater the level of agreement between the 2 observers.

 

Result for Kappa     Strength of Agreement
< 0.00               poor
0.00 – 0.20          slight
0.21 – 0.40          fair
0.41 – 0.60          moderate
0.61 – 0.80          substantial
0.81 – 1.00          almost perfect

from page 165, Landis and Koch (1977)
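A small sketch that maps a kappa value to the Landis and Koch labels above; values falling between the tabulated bands (e.g. 0.205) are assigned to the next band up, which is an assumption about how to read the table continuously.

```python
def strength_of_agreement(kappa_value):
    # Descriptive labels from page 165 of Landis and Koch (1977).
    if kappa_value < 0.00:
        return "poor"
    if kappa_value <= 0.20:
        return "slight"
    if kappa_value <= 0.40:
        return "fair"
    if kappa_value <= 0.60:
        return "moderate"
    if kappa_value <= 0.80:
        return "substantial"
    return "almost perfect"

print(strength_of_agreement(0.696))  # substantial
```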

 

