Measuring repeatability and validity of histological diagnosis - A brief review with some practical examples


Abstract

Evaluation of histological diagnosis requires an index of agreement (to measure repeatability and validity) together with a method of assessing bias. Cohen's kappa statistic appears to be the most suitable tool for measuring levels of agreement; unsatisfactory agreement may be caused by bias. Bias can be studied further by examining levels of agreement for each diagnostic category, or by searching for categories of disagreement in which more observations occur than would be expected by chance alone. This article gives reasons for choosing the kappa statistic, with examples illustrating its calculation and the investigation of bias.
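The kappa statistic mentioned in the abstract corrects raw agreement for the agreement expected by chance, using the formula kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the chance agreement implied by each rater's marginal category frequencies. As a minimal sketch (the rater labels and slide counts below are hypothetical, not taken from the article):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal category frequencies.
    """
    n = len(rater_a)
    # Observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two marginal distributions
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two pathologists each classify four slides.
a = ["benign", "benign", "malignant", "malignant"]
b = ["benign", "malignant", "malignant", "malignant"]
print(cohens_kappa(a, b))  # 0.5
```

Here p_o = 3/4, while p_e = 1/2 (rater A calls half the slides benign, rater B one quarter), so kappa = (0.75 - 0.5) / 0.5 = 0.5, i.e. agreement is halfway between chance and perfect.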

Citation (APA)

Silcocks, P. B. S. (1983). Measuring repeatability and validity of histological diagnosis - A brief review with some practical examples. Journal of Clinical Pathology. BMJ Publishing Group. https://doi.org/10.1136/jcp.36.11.1269
