Abstract
Given a question about a prototypical situation, such as "Name something that people usually do before they leave the house for work," a human can easily answer it from acquired experience. There can be multiple right answers to such a question, some more common for the situation than others. This paper introduces a new question answering dataset for training and evaluating the common-sense reasoning capabilities of artificial intelligence systems in such prototypical situations. The training set is gathered from an existing set of questions played on FAMILY FEUD, a long-running international game show. The hidden evaluation set is created by gathering answers to each question from 100 crowd-workers. We also propose a generative evaluation task in which a model must output a ranked list of answers, ideally covering all prototypical answers for a question. After presenting multiple competitive baseline models, we find that human performance still exceeds model scores on all evaluation metrics by a meaningful gap, supporting the challenging nature of the task.
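As a rough illustration of the generative evaluation task described in the abstract, the sketch below scores a model's ranked answer list against crowd-sourced answer clusters. The cluster format, the `max_answers` cutoff, and the exact-string-match criterion are simplifying assumptions for illustration only; they are not the paper's official evaluation, which also handles softer answer matching.

```python
from typing import Dict, List

def score_ranked_answers(predictions: List[str],
                         clusters: Dict[str, int],
                         max_answers: int = 10) -> float:
    """Hypothetical scorer: match each predicted answer, in rank order,
    to an unused gold cluster by exact string match, credit that
    cluster's crowd count, and normalize by the total crowd count.

    `clusters` maps a representative answer string to the number of
    crowd-workers whose answer fell in that cluster (assumed format).
    """
    matched = set()
    credit = 0
    for answer in predictions[:max_answers]:
        key = answer.strip().lower()
        if key in clusters and key not in matched:
            matched.add(key)
            credit += clusters[key]
    total = sum(clusters.values())
    return credit / total if total else 0.0

# Toy usage with made-up crowd counts for the example question.
gold = {"eat breakfast": 38, "get dressed": 27, "shower": 19, "lock the door": 9}
preds = ["get dressed", "eat breakfast", "brush teeth"]
print(f"score = {score_ranked_answers(preds, gold):.2f}")  # 0.70
```

Under this toy scoring, a ranked list earns more credit by covering the clusters that many crowd-workers gave, which mirrors the abstract's goal of covering all prototypical answers rather than producing a single correct one.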
Citation
Boratko, M., Li, X. L., O’Gorman, T., Das, R., Le, D., & McCallum, A. (2020). ProtoQA: A question answering dataset for prototypical common-sense reasoning. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1122–1136). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.85