Logic-guided data augmentation and regularization for consistent question answering

Abstract

Many natural language questions require qualitative, quantitative, or logical comparisons between two entities or events. This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models. Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model. By improving the global consistency of predictions, our approach achieves large improvements over previous methods on a variety of question answering (QA) tasks, including multiple-choice qualitative reasoning, cause-effect reasoning, and extractive machine reading comprehension. In particular, our method significantly improves the performance of RoBERTa-based models by 1–5% across datasets. We advance the state of the art by around 5–8% on WIQA and QuaRel and reduce consistency violations by 58% on HotpotQA. We further demonstrate that our approach can learn effectively from limited data.
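The augmentation idea the abstract describes can be illustrated with a minimal sketch. This is not the authors' code: the example data format (a dict with hypothetical `question`, `entity_a`, `entity_b`, and `label` fields) and the particular logic rule shown (the symmetry of a two-entity comparison, where swapping the compared entities flips a yes/no answer) are assumptions chosen for illustration; the paper also exploits other rules, such as transitivity.

```python
# Hedged sketch (not the paper's implementation): logic-guided data
# augmentation for yes/no comparison questions using the symmetry rule
#   answer(compare(A, B)) = yes  <=>  answer(compare(B, A)) = no.
# The example schema below is hypothetical.

def augment_symmetric(example):
    """Produce the logically implied mirror of a comparison QA example.

    Swapping the two compared entities flips a yes/no label, yielding a
    new labeled example at no annotation cost. A consistency regularizer
    would then penalize the model when its predictions on `example` and
    on `augment_symmetric(example)` disagree with this rule.
    """
    flipped = {"yes": "no", "no": "yes"}
    return {
        "question": example["question"],
        "entity_a": example["entity_b"],  # swap the compared entities
        "entity_b": example["entity_a"],
        "label": flipped[example["label"]],  # symmetry flips the answer
    }


if __name__ == "__main__":
    ex = {
        "question": "Is the first object heavier than the second?",
        "entity_a": "boulder",
        "entity_b": "feather",
        "label": "yes",
    }
    print(augment_symmetric(ex))
```

During training, both the original and the augmented examples are fed to the model, and a consistency-based loss term (e.g., a penalty on the distance between the model's probability for the original answer and one minus its probability on the mirrored question) enforces agreement with the logic rule.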

Citation (APA)

Asai, A., & Hajishirzi, H. (2020). Logic-guided data augmentation and regularization for consistent question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 5642–5650). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.499
