Criteria for collapsing rating scale responses: A case study of the CLASS


Abstract

Assessments of students’ attitudes and beliefs often rely on questions with rating scales that ask students the extent to which they agree or disagree with a statement. Unlike traditional physics problems with a single correct answer, rating scale questions often have a spectrum of five or more responses, none of which are correct. Researchers have found that responses on rating scale items can generally be treated as continuous and that, unless there is good evidence to do otherwise, response categories should not be collapsed [1–3]. We discuss two potential reasons for collapsing response categories (lack of use and redundancy) and how to empirically test for them. To illustrate these methods, we apply them to the Colorado Learning Attitudes about Science Survey (CLASS). We found that students used all the response categories on the CLASS but that three of them were potentially redundant. This led us to conclude that the CLASS should be scored on a 5-point or 3-point scale, rather than the 2-point scale recommended by the instrument developers [4]. More broadly, we recommend the judicious use of data manipulations when scoring assessments and retaining all response categories unless there is a strong rationale for collapsing them.
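The three scoring schemes discussed above can be sketched as simple mappings from a 5-point Likert response to a score. This is an illustrative sketch, not the authors' code; the exact category-to-score mappings (which endpoints are merged, and where the 2-point cutoff falls) are assumptions for illustration.

```python
# Illustrative scoring of 5-point Likert responses
# (1 = strongly disagree ... 5 = strongly agree).

def score_5pt(response: int) -> int:
    """Retain all five categories (the recommended default)."""
    return response

def score_3pt(response: int) -> int:
    """Collapse potentially redundant endpoint pairs into
    disagree / neutral / agree (assumed mapping)."""
    return {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}[response]

def score_2pt(response: int) -> int:
    """Dichotomous scoring: 1 for agreement, 0 otherwise
    (assumed cutoff for illustration)."""
    return 1 if response >= 4 else 0

responses = [1, 2, 3, 4, 5]
print([score_3pt(r) for r in responses])  # [1, 1, 2, 3, 3]
print([score_2pt(r) for r in responses])  # [0, 0, 0, 1, 1]
```

Note how the 2-point scheme discards the distinction between neutral and disagreeing responses, which is the kind of information loss the abstract cautions against.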

Citation (APA)

Van Dusen, B., & Nissen, J. (2019). Criteria for collapsing rating scale responses: A case study of the CLASS. In Physics Education Research Conference Proceedings (pp. 585–590). American Association of Physics Teachers. https://doi.org/10.1119/perc.2019.pr.Van_Dusen
