Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering

Abstract

We propose Chain-of-Questions, a framework that trains a model to robustly answer multistep questions by generating and answering sub-questions. We obtain supervision for sub-questions from human-annotated question decomposition meaning representation (QDMR), but QDMR does not include annotated answers to those sub-questions. To overcome this challenge, we treat sub-answers as latent variables and infer them with a novel dynamic mixture of Hard-EM and MAPO. Chain-of-Questions is effective and robust, outperforming strong neuro-symbolic methods by 9.0 F1 on a DROP contrast set and GPT-3.5 by 24.3 F1 on a HotpotQA adversarial set.
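The Hard-EM component can be illustrated with a toy sketch: among candidate chains of sub-answers, the hard E-step selects the single highest-scoring assignment, which then serves as the pseudo-label for the M-step update. The scorer and candidate chains below are hypothetical stand-ins, not the paper's actual model or decomposition data.

```python
# Toy Hard-EM E-step over latent sub-answer chains (illustrative sketch only).
# A "chain" is a list of sub-answers, one per QDMR sub-question.

def hard_e_step(candidate_chains, score_fn):
    """Hard E-step: pick the single highest-scoring latent assignment.

    The selected chain becomes the training target for the M-step,
    instead of marginalizing over all candidates as in soft EM.
    """
    return max(candidate_chains, key=score_fn)

def make_scorer(gold_answer, log_prob):
    """Hypothetical scorer: prefer chains whose final sub-answer matches the
    gold final answer, breaking ties by a stand-in model log-probability."""
    def score(chain):
        return (chain[-1] == gold_answer, log_prob(chain))
    return score

# Stand-in for a model's log-probability: shorter chains score higher here.
log_prob = lambda chain: -len(" ".join(chain))

chains = [["Paris", "France"], ["Lyon", "France"], ["Paris", "Germany"]]
best = hard_e_step(chains, make_scorer("France", log_prob))
# best == ["Lyon", "France"]: it matches the gold answer and has the
# higher toy log-probability among the matching chains.
```

In the paper's full method, this hard selection is dynamically mixed with MAPO-style policy optimization, which instead weights sampled chains by reward; the sketch above covers only the Hard-EM half.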

Citation (APA)

Zhu, W., Thomason, J., & Jia, R. (2023). Chain-of-Questions training with latent answers for robust multistep question answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 8845–8860). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.emnlp-main.547
