Validating Wordscores: The Promises and Pitfalls of Computational Text Scaling

Abstract

Wordscores is a popular computational text analysis method with numerous applications in communication research. Wordscores claims to scale documents on specified dimensions without requiring researchers to read or even understand the language of the input text. We investigate whether Wordscores delivers on this claim by scaling the Euromanifestos of 117 political parties across 23 countries on 4 salient dimensions of political conflict. We assess validity by comparing the Wordscores estimates to expert surveys and other judgmental measures, and by examining the ability of the Wordscores estimates to predict party membership in the European Parliament groups. We find that the Wordscores estimates correlate poorly with expert and judgmental measures of party positions, while the latter outperform Wordscores in the predictive validity test. We conclude that Wordscores does not live up to its original claim of a "quick and easy" language-blind method, and we urge researchers to demonstrate the validity of the method in their domain of interest before any empirical analysis.

Citation (APA)

Bruinsma, B., & Gemenis, K. (2019). Validating Wordscores: The Promises and Pitfalls of Computational Text Scaling. Communication Methods and Measures, 13(3), 212–227. https://doi.org/10.1080/19312458.2019.1594741
