Impact of survey design features on score reliability


Abstract

The a priori impact of survey design and implementation tactics on score reliability is not well understood. Using a two-by-two-by-two cluster-randomized, post-test-only experimental design, Cronbach's coefficient alpha of internal consistency reliability was calculated for scores on three personality scales. The experimental conditions were the presence versus absence of quality-control items, anonymous versus confidential administration, and randomly scrambled versus grouped survey items. Alpha was calculated for each of the eight treatment groups, and Hakstian and Whalen's (1976) formulae were used to calculate the standard deviation of alpha. These summary data were then used in analysis of variance (ANOVA) tests. The ANOVA results were mixed across the three personality scales: the use of quality-control items had no impact on alpha for any scale, confidentiality improved alpha on one scale and decreased it on two others, and grouping items together improved alpha on two scales and decreased it on another. Although most of the exploratory interaction tests for each scale were statistically significant, none were in the direction implied by the confluence of the main-effect hypotheses. These mixed results suggest that a priori machinations by survey designers and administrators may often result in unwanted differences in score reliability.
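The quantity at the center of the study is Cronbach's coefficient alpha, computed separately for each of the eight treatment groups. A minimal sketch of that calculation is shown below; the group sizes, item counts, and data are hypothetical placeholders, and the Hakstian and Whalen (1976) standard-deviation formula and the subsequent ANOVA on the summary data are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical illustration only: alpha computed per treatment group in a
# 2 x 2 x 2 design (eight groups), mirroring the procedure in the abstract.
rng = np.random.default_rng(0)
groups = {g: rng.normal(size=(50, 10)) for g in range(8)}  # placeholder data, not the study's
alphas = {g: cronbach_alpha(scores) for g, scores in groups.items()}
```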

Citation (APA)

Miller, B. K., & Simmering, M. (2020, November 23). Impact of survey design features on score reliability. Collabra: Psychology. University of California Press. https://doi.org/10.1525/collabra.17975
