The interobserver reliability of a rating scale employed in several multicenter stroke trials was investigated. Twenty patients who had a stroke were rated with this scale by four clinical stroke fellows, and each patient was independently evaluated by one pair of observers. The degree of interobserver agreement for each item on the scale was determined by calculating the kappa statistic. Interobserver agreement was moderate to substantial for 9 of 13 items. This rating system compares favorably with other scales for which such comparisons can be made; however, the validity of this system remains to be established.
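The kappa statistic referenced above measures agreement between two raters beyond what chance alone would produce. A minimal sketch of Cohen's kappa is shown below; the item ratings are hypothetical and not taken from the study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical severity scores (0-3) from one pair of observers
a = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
b = [0, 1, 2, 1, 3, 1, 0, 2, 2, 1]
print(round(cohens_kappa(a, b), 3))  # 0.726
```

By convention (Landis and Koch), kappa values of 0.41-0.60 are interpreted as moderate agreement and 0.61-0.80 as substantial agreement, which matches the "moderate to substantial" wording of the abstract.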