The Online Kappa Calculator can be used to calculate kappa--a chance-adjusted measure of agreement--for any number of cases, categories, or raters. Two variations of kappa are provided: Fleiss's (1971) fixed-marginal multirater kappa and Randolph's (2005) free-marginal multirater kappa (see Randolph, 2005; Warrens, 2010), with Gwet's (2010) variance formula. Brennan and Prediger (1981) suggest using free-marginal kappa when raters are not forced to assign a certain number of cases to each category and using fixed-marginal kappa when they are. Values of kappa can range from -1.0 to 1.0, with -1.0 indicating perfect disagreement below chance, 0.0 indicating agreement equal to chance, and 1.0 indicating perfect agreement above chance. Fleiss's (1981) rule of thumb is that kappa values less than .40 are "poor," values from .40 to .75 are "intermediate to good," and values above .75 are "excellent."
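The two statistics differ only in how chance agreement is estimated: fixed-marginal (Fleiss) kappa derives it from the observed category proportions, while free-marginal (Randolph) kappa assumes each of the k categories is equally likely, so chance agreement is 1/k. A minimal sketch of both formulas, assuming the same cases-by-categories layout the calculator uses (cell [i][j] holds the number of raters who assigned case i to category j):

```python
def observed_agreement(matrix, n_raters):
    # Mean proportion of agreeing rater pairs per case.
    per_case = [
        (sum(n * n for n in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in matrix
    ]
    return sum(per_case) / len(per_case)

def fleiss_kappa(matrix):
    # Fixed-marginal: chance agreement from observed category proportions.
    n_raters = sum(matrix[0])
    p_obs = observed_agreement(matrix, n_raters)
    total = len(matrix) * n_raters
    p_j = [sum(row[j] for row in matrix) / total
           for j in range(len(matrix[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_obs - p_e) / (1 - p_e)

def free_marginal_kappa(matrix):
    # Free-marginal: chance agreement is simply 1/k for k categories.
    n_raters = sum(matrix[0])
    p_obs = observed_agreement(matrix, n_raters)
    p_e = 1 / len(matrix[0])
    return (p_obs - p_e) / (1 - p_e)

# Example: 5 cases, 2 categories, 4 raters.
data = [[4, 0], [3, 1], [2, 2], [4, 0], [0, 4]]
print(round(fleiss_kappa(data), 3))        # 0.487
print(round(free_marginal_kappa(data), 3)) # 0.533
```

Note that the two values diverge further as the observed marginals become more unbalanced, which is why Brennan and Prediger's (1981) recommendation hinges on whether the marginals were constrained in advance.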
You can copy and paste data by clicking the down arrow to the right of the "# of Raters" box. Once you click the arrow, an "Import Data" window should appear in which you can paste data copied from a spreadsheet. Hit the OK button to get the results.
Alternatively, you can input data manually. First, specify the number of cases, categories, and raters. A table will appear with cases in rows and categories in columns. Then, in the empty cells, input the number of raters who agreed that a certain case belongs to a certain category.
Do not leave any empty cells; input a zero if no raters agreed that a case belonged to that category. The sum of each row should equal the number of raters.
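The rules above can be checked programmatically before submitting data. A hypothetical validation sketch, assuming a list-of-lists table with the same layout as the calculator's input grid (the variable names are illustrative, not part of the calculator):

```python
n_raters = 3
table = [
    [3, 0, 0],  # all 3 raters put case 1 in category 1
    [1, 2, 0],  # one rater chose category 1, two chose category 2
    [0, 0, 3],  # all 3 raters put case 3 in category 3
]

for i, row in enumerate(table, start=1):
    # No empty cells: zeros are fine, but every cell must be filled,
    # and each row must sum to the number of raters.
    assert all(n >= 0 for n in row), f"Row {i} has an invalid count"
    assert sum(row) == n_raters, f"Row {i} must sum to {n_raters}"

print("table is valid")
```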
See the following articles for more information on the formulas used here:
- Brennan, R. L., & Prediger, D. J. (1981). Coefficient Kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement, 41(3), 687-699.
- Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382.
- Fleiss, J. L. (1981). Statistical methods for rates and proportions. Hoboken, NJ: Wiley.
- Gwet, K. L. (2010). Handbook of interrater reliability (2nd ed.). Gaithersburg, MD: Advanced Analytics.
- Randolph, J. J. (2005). Free-marginal multirater kappa: An alternative to Fleiss' fixed-marginal multirater kappa. Paper presented at the Joensuu University Learning and Instruction Symposium 2005, Joensuu, Finland, October 14-15, 2005. (ERIC Document Reproduction Service No. ED490661)
- Warrens, M. J. (2010). Inequalities between multi-rater kappas. Advances in Data Analysis and Classification, 4(4), 271-286. doi:10.1007/s11634-010-0073-4
Please cite as: Randolph, J. J. (2008). Online Kappa Calculator [Computer software]. Retrieved from http://justus.randolph.name/kappa
(c) 2016 Justus Randolph