Interrater reliability of the NIH Stroke Scale


Abstract

The interobserver reliability of the NIH Stroke Scale, a rating scale employed in several multicenter stroke trials, was investigated. Twenty stroke patients were rated with the scale by four clinical stroke fellows; each patient was independently evaluated by one pair of observers. The degree of interrater agreement for each item on the scale was determined by calculating the kappa statistic. Interobserver agreement was moderate to substantial for 9 of the 13 items. This rating system compares favorably with other scales for which such comparisons can be made. However, the validity of this system remains to be established.
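The kappa statistic mentioned above corrects raw percent agreement for the agreement expected by chance. A minimal sketch of the computation for a single scale item is below; the two raters' scores are invented for illustration and are not data from the study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on the same cases."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of cases both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores (e.g., a 0-4 motor item) from two observers.
rater1 = [0, 1, 2, 2, 3, 0, 1, 4, 2, 1]
rater2 = [0, 1, 2, 3, 3, 0, 0, 4, 2, 1]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.747
```

By the conventional Landis-and-Koch benchmarks, values of 0.41 to 0.60 are read as "moderate" agreement and 0.61 to 0.80 as "substantial," the terms used in the abstract.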


Authors

  • Larry B. Goldstein

  • Christina Bertels

  • James N. Davis
