Interrater reliability: Completing the methods description in medical records review studies


Abstract

In medical records review studies, information on the interrater reliability (IRR) of the data is seldom reported. This study assesses the IRR of data collected for a complex medical records review study. Elements selected for determining IRR included "demographic" data that require copying explicit information (e.g., gender, birth date), "free-text" data that require identifying and copying (e.g., chief complaints and diagnoses), and data that require abstractor judgment in determining what to record (e.g., whether heart disease was considered). Rates of agreement were assessed by the greatest number of answers (one to all n) that were the same. The IRR scores improved over time. At 1 month, the reliability for demographic data elements was very good, for free-text data elements was good, but for data elements requiring abstractor judgment was unacceptable (only 3.4 of six answers agreed, on average). All assessments after 6 months showed very good to excellent IRR. This study demonstrates that IRR can be evaluated and summarized, providing important information to the study investigators and to the consumer for assessing the reliability of the data and therefore the validity of the study results and conclusions. IRR information should be required for all large medical records studies. Copyright © 2005 by the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
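The abstract describes scoring agreement as the greatest number of identical answers among the n abstractors for each data element. As a minimal sketch of that idea (the answer values and the six-abstractor setup here are hypothetical, not taken from the study's data):

```python
from collections import Counter

def agreement_count(answers):
    """Return the greatest number of identical answers among the abstractors.

    E.g., if 5 of 6 abstractors recorded the same value, the score is 5.
    """
    return Counter(answers).most_common(1)[0][1]

# Hypothetical responses from six abstractors for two data elements
items = [
    ["MI", "MI", "MI", "MI", "angina", "MI"],   # 5 of 6 agree
    ["yes", "no", "yes", "yes", "no", "yes"],   # 4 of 6 agree
]

scores = [agreement_count(a) for a in items]
mean_agreement = sum(scores) / len(scores)
print(scores, mean_agreement)  # [5, 4] 4.5
```

On this scale, the paper's finding that judgment-based elements averaged only 3.4 of six matching answers at 1 month indicates barely better than half the abstractors agreeing.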


Citation (APA)

Yawn, B. P., & Wollan, P. (2005, May 15). Interrater reliability: Completing the methods description in medical records review studies. American Journal of Epidemiology. https://doi.org/10.1093/aje/kwi122
