Statistical methodology: II. Reliability and validity assessment in study design, Part A

Abstract

Assessment of test reliability and validity is often complex. Although tests of correlation are frequently used to measure intertest agreement, such indexes measure only the strength of the linear relationship between variables and may not provide an accurate assessment of the correspondence between test results. Inspection of intertest differences, either visually or using the t test, may provide a better indicator of the correspondence between test results and accounts for measurement biases. Strength of association between categorical variables can be measured using related tests such as the kappa statistic. Test reliability may be assessed by retesting, but this is not practical in many cases in which subject memory or learning may confound the results of repeated examinations. Several methods exist for determining reliability from a single test administration and for assessing the correspondence between answers to homogeneous test questions. The continuation article (Part B) on this subject examines the concept and assessment of validity in more detail and discusses techniques for maximizing the reliability and validity of questionnaires.
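
The abstract names several statistics without showing them in use. The sketch below is not from the article; it is a minimal illustration on simulated data, assuming Python with NumPy and SciPy, of the ideas mentioned: a high Pearson correlation coexisting with a systematic intertest bias detected by a paired t test, Cohen's kappa computed from a contingency table for categorical agreement, and Cronbach's alpha as one common single-administration reliability estimate. All data, variable names, and parameter values are illustrative assumptions.

```python
"""Illustrative sketch (not from the article) of the statistics named in the
abstract. All data below are simulated assumptions."""

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- Correlation vs. agreement --------------------------------------------
# Two tests that track each other closely but with a constant +5 unit bias:
# correlation is high, yet the tests do not agree.
test_a = rng.normal(50, 10, size=100)
test_b = test_a + 5 + rng.normal(0, 2, size=100)

r, _ = stats.pearsonr(test_a, test_b)
t, p = stats.ttest_rel(test_a, test_b)  # paired t test on intertest differences
print(f"Pearson r = {r:.2f} (high), paired t = {t:.2f}, p = {p:.3g} (bias detected)")

# --- Kappa statistic for categorical agreement -----------------------------
def cohen_kappa(x, y):
    """Chance-corrected agreement between two categorical ratings."""
    cats = np.union1d(x, y)
    n = len(x)
    table = np.array([[np.sum((x == a) & (y == b)) for b in cats] for a in cats])
    p_obs = np.trace(table) / n                                      # observed agreement
    p_exp = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n ** 2   # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

rater_1 = rng.integers(0, 2, size=100)
rater_2 = np.where(rng.random(100) < 0.8, rater_1, 1 - rater_1)  # ~80% raw agreement
print(f"Cohen's kappa = {cohen_kappa(rater_1, rater_2):.2f}")

# --- Reliability from a single administration ------------------------------
def cronbach_alpha(items):
    """Internal-consistency reliability; rows = subjects, columns = items."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

ability = rng.normal(0, 1, size=200)
items = ability[:, None] + rng.normal(0, 1, size=(200, 5))  # 5 homogeneous items
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```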

Authors

  • D Karras
