What question answering can learn from trivia nerds


Abstract

Question answering (QA) is not just about building systems; this NLP subfield also creates and curates challenging question datasets that reveal the best systems. We argue that QA datasets (and QA leaderboards) closely resemble trivia tournaments: the questions that agents (humans or machines) answer reveal a "winner". However, the research community has ignored the lessons from decades of the trivia community creating vibrant, fair, and effective QA competitions. After detailing problems with existing QA datasets, we outline several lessons that transfer to QA research: removing ambiguity, identifying better QA agents, and adjudicating disputes.

Citation (APA)

Boyd-Graber, J., & Börschinger, B. (2020). What question answering can learn from trivia nerds. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 7422–7435). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.662
