Auditing Cross-Cultural Consistency of Human-Annotated Labels for Recommendation Systems

Abstract

Recommendation systems increasingly depend on massive human-labeled datasets; however, the human annotators hired to generate these labels increasingly come from homogeneous backgrounds. This poses an issue when downstream predictive models based on these labels are applied globally to a heterogeneous set of users. We study this disconnect with respect to the labels themselves, asking whether they are "consistently conceptualized" across annotators of different demographics. In a case study of video game labels, we conduct a survey of 5,174 gamers, identify a subset of inconsistently conceptualized game labels, perform causal analyses, and suggest both cultural and linguistic reasons for cross-country differences in label annotation. We further demonstrate that predictive models of game annotations perform better when trained on globally sourced data than on homogeneous (single-country) data. Finally, we provide a generalizable framework for practitioners to audit their own data annotation processes for consistent label conceptualization, and we encourage practitioners to consider global inclusivity in recommendation systems from the earliest stages of annotator recruitment and data labeling.
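
As a concrete illustration of the kind of audit the abstract describes, the sketch below (not the authors' released code; the columns item_id, country, and label are hypothetical) computes per-country positive-label rates for each item and flags items whose rates diverge sharply across annotator countries, one simple signal that a label may be inconsistently conceptualized.

```python
# Minimal sketch, assuming annotation data with hypothetical columns:
#   item_id  - the labeled item (e.g., a video game)
#   country  - the annotator's country
#   label    - a binary annotation (1 = label applies, 0 = it does not)
import pandas as pd

def per_country_label_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Mean positive-label rate per item and country.

    A large per-item spread across countries suggests the label is not
    consistently conceptualized by annotators from different regions.
    """
    rates = df.pivot_table(index="item_id", columns="country",
                           values="label", aggfunc="mean")
    rates["cross_country_spread"] = rates.max(axis=1) - rates.min(axis=1)
    return rates.sort_values("cross_country_spread", ascending=False)

# Toy example: annotations for game_a agree across countries,
# while annotations for game_b depend on the annotator's country.
toy = pd.DataFrame({
    "item_id": ["game_a"] * 4 + ["game_b"] * 4,
    "country": ["US", "US", "JP", "JP"] * 2,
    "label":   [1, 1, 1, 1,   # game_a: consistent
                1, 1, 0, 0],  # game_b: country-dependent
})
print(per_country_label_rates(toy))
```

In practice, such per-item spread statistics would be computed over many annotations per country and combined with the kind of causal and linguistic analyses the paper describes, rather than read off raw rates alone.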

Citation (APA)

Pang, R. Y., Cenatempo, J., Graham, F., Kuehn, B., Whisenant, M., Botchway, P., … Koenecke, A. (2023). Auditing Cross-Cultural Consistency of Human-Annotated Labels for Recommendation Systems. In ACM International Conference Proceeding Series (pp. 1531–1552). Association for Computing Machinery. https://doi.org/10.1145/3593013.3594098
