Intercoder reliability, or interrater agreement (Tinsley and Weiss, 1975, 2000), refers to the level of agreement between independent coders who evaluate the same content, such as a text or an artifact. As an essential component of content analysis, intercoder reliability is widely used in social science research, and scholars argue that without the establishment of reliability, content analysis measures are useless (Neuendorf, 2002). Some of the most commonly used indices for calculating intercoder reliability are listed below; a brief computational sketch follows the list:

    • Holsti’s coefficient
    • Scott’s pi (π)
    • Cohen’s kappa (κ)
    • Krippendorff’s alpha (α)

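    To make the arithmetic behind these indices concrete, here is a minimal, illustrative Python sketch of three of them (Holsti’s coefficient, Scott’s pi, and Cohen’s kappa) for two coders rating the same items. The function names and the sample data are ours for illustration only; this is not the DiVoMiner® implementation, and Krippendorff’s alpha is omitted because it involves a more elaborate computation.

from collections import Counter

def holsti(coder1, coder2):
    # Holsti's coefficient: 2M / (N1 + N2), where M is the number of agreed
    # decisions; when both coders rate the same items this reduces to simple
    # percent agreement.
    if len(coder1) != len(coder2):
        raise ValueError("Both coders must rate the same set of items.")
    m = sum(a == b for a, b in zip(coder1, coder2))
    return 2 * m / (len(coder1) + len(coder2))

def observed_agreement(coder1, coder2):
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def scotts_pi(coder1, coder2):
    # Scott's pi: (Po - Pe) / (1 - Pe), where chance agreement Pe is based on
    # the pooled category distribution of both coders.
    n = len(coder1)
    po = observed_agreement(coder1, coder2)
    pooled = Counter(coder1) + Counter(coder2)
    pe = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (po - pe) / (1 - pe)

def cohens_kappa(coder1, coder2):
    # Cohen's kappa: (Po - Pe) / (1 - Pe), where chance agreement Pe is built
    # from each coder's own category distribution.
    n = len(coder1)
    po = observed_agreement(coder1, coder2)
    c1, c2 = Counter(coder1), Counter(coder2)
    pe = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(c1) | set(c2))
    return (po - pe) / (1 - pe)

# Hypothetical data: two coders classify ten items into categories A, B, C.
coder1 = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
coder2 = ["A", "B", "B", "B", "C", "A", "B", "C", "A", "A"]
print(f"Holsti's coefficient: {holsti(coder1, coder2):.2f}")   # 0.80
print(f"Scott's pi:           {scotts_pi(coder1, coder2):.2f}")
print(f"Cohen's kappa:        {cohens_kappa(coder1, coder2):.2f}")

    In this toy example Scott’s pi and Cohen’s kappa coincide because the two coders happen to have identical category distributions; the two indices diverge when the coders’ marginal distributions differ.
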
    Currently, on the DiVoMiner® platform, researchers can choose the appropriate index for their research from the above four options. In November 2022, our team was very glad to see another published article using DiVoMiner® for an intercoder reliability test in the SCIE/SSCI journal Patient Education and Counseling (ISSN: 0738-3991, 1873-5134; impact factor: 3.18). In the article, Zhang, Zhou and Fei (2022) explained: “The data coding was executed by two trained coders, who completely understood the sampled conversations during the pre-coding process. They were able to agree to the classifications assigned to repetitions. Both coders used the DiVoMiner website (an online coding website) to independently code 100 repetitions from out-of-sample dialogs to test their internal reliability. In instances of unclear content, the coders arrived at a consensus after discussion and achieved a suitable internal reliability standard (Holsti’s composite reliability coefficient = 0.86). Both coders demonstrated their comprehensive apprehension of the criteria designated for each category. Subsequently, each coded 36 sets of conversations in the sample.”

    We also recommend a useful website with a comprehensive and well-structured introduction to intercoder reliability, maintained by Matthew Lombard, Associate Professor in the Department of Media & Communication at Temple University in Philadelphia. Click this link to take a look: http://matthewlombard.com/reliability/#Tinsley,%201975.

    References:

    Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage.

    Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.

    Tinsley, H. E. A. & Weiss, D. J. (1975). Interrater reliability and agreement of subjective judgements. Journal of Counseling Psychology, 22, 358-376.

    Tinsley, H. E. A. & Weiss, D. J. (2000). Interrater reliability and agreement. In H. E. A. Tinsley & S. D. Brown (Eds.), Handbook of applied multivariate statistics and mathematical modeling (pp. 95-124). San Diego, CA: Academic Press.