Would you rather? A new benchmark for learning machine alignment with cultural values and social preferences


Abstract

Understanding human preferences, along with cultural and social nuances, lies at the heart of natural language understanding. Concretely, we present a new task and corpus for learning alignments between machine and human preferences. The newly introduced problem asks a model to predict the preferable option between two sentences describing scenarios that may involve social, cultural, ethical, or moral situations. The problem is framed as a natural language inference task with crowd-sourced preference votes by human players, obtained from a gamified voting platform. Along with the release of a new dataset of 200K data points, we benchmark several state-of-the-art neural models, along with BERT and related pretrained models, on this task. Our experimental results show that current state-of-the-art NLP models still leave much room for improvement.
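The abstract frames the task as NLI-style sentence-pair classification: a model reads two "Would you rather?" scenarios and predicts which one human voters preferred. Below is a minimal sketch of that framing using a BERT-style sequence classifier; the checkpoint, label convention, and example scenarios are illustrative assumptions, not the authors' released code or data.

```python
# Sketch of the pairwise-preference task as NLI-style sentence-pair
# classification. Checkpoint, label convention, and scenarios are
# assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Assumed binary head: label 0 -> scenario A preferred, 1 -> scenario B.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

scenario_a = "Would you rather always be ten minutes late to everything?"
scenario_b = "Would you rather always be twenty minutes early to everything?"

# Encode the two scenarios as one sentence pair, mirroring the
# NLI-style input format the abstract describes.
inputs = tokenizer(scenario_a, scenario_b, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)

preferred = "A" if logits.argmax(dim=-1).item() == 0 else "B"
print(f"Predicted preferred scenario: {preferred}")
```

In practice such a classifier would be fine-tuned on the 200K crowd-voted pairs; with an untrained head, as here, predictions are essentially random.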

Citation (APA)

Tay, Y., Ong, D., Fu, J., Chan, A., Chen, N. F., Tuan, L. A., & Pal, C. (2020). Would you rather? A new benchmark for learning machine alignment with cultural values and social preferences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5369–5373). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.477
