Break it down: A question understanding benchmark

Abstract

Understanding natural language questions entails the ability to break down a question into the requisite steps for computing its answer. In this work, we introduce a Question Decomposition Meaning Representation (QDMR) for questions. QDMR constitutes the ordered list of steps, expressed through natural language, that are necessary for answering a question. We develop a crowdsourcing pipeline, showing that quality QDMRs can be annotated at scale, and release the BREAK dataset, containing over 83K pairs of questions and their QDMRs. We demonstrate the utility of QDMR by showing that (a) it can be used to improve open-domain question answering on the HOTPOTQA dataset, (b) it can be deterministically converted to a pseudo-SQL formal language, which can alleviate annotation in semantic parsing applications. Last, we use BREAK to train a sequence-to-sequence model with copying that parses questions into QDMR structures, and show that it substantially outperforms several natural baselines.
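To make the QDMR format concrete, the sketch below shows how a question might be decomposed into an ordered list of natural-language steps, with later steps referring back to earlier ones via "#k" placeholders, plus a small helper that prints the numbered decomposition. The example question, the step wordings, and the print_decomposition helper are illustrative assumptions for this sketch; they are not taken from the BREAK dataset or the authors' code.

# Minimal illustrative sketch of a QDMR-style decomposition (hypothetical example,
# not drawn from the BREAK dataset): each step is a short natural-language
# operation, and "#k" refers to the result of step k.
question = "Which team won the most games in 2019?"

qdmr_steps = [
    "return games",                        # step #1: select a set of entities
    "return #1 played in 2019",            # step #2: filter step #1
    "return teams that won #2",            # step #3: project the winners of #2
    "return number of #2 won by each #3",  # step #4: group and count
    "return #3 where #4 is highest",       # step #5: superlative over #4
]

def print_decomposition(q, steps):
    # Print the question followed by its numbered decomposition steps.
    print(q)
    for i, step in enumerate(steps, start=1):
        print(f"#{i}: {step}")

print_decomposition(question, qdmr_steps)

Because each step corresponds to a simple operation over the results of earlier steps (selecting, filtering, grouping, taking a superlative, and so on), an ordered list of this kind lends itself to the deterministic conversion to a pseudo-SQL formal language mentioned in the abstract.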

Citation (APA)

Wolfson, T., Geva, M., Gupta, A., Gardner, M., Goldberg, Y., Deutch, D., & Berant, J. (2020). Break it down: A question understanding benchmark. Transactions of the Association for Computational Linguistics, 8, 183–198. https://doi.org/10.1162/tacl_a_00309
