ROBUT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations


Abstract

Despite significant progress in question answering on tabular data (Table QA), it remains unclear whether, and to what extent, existing Table QA models are robust to task-specific perturbations, e.g., replacing key question entities or shuffling table columns. To systematically study the robustness of Table QA models, we propose a benchmark called ROBUT, which builds upon existing Table QA datasets (WTQ, WIKISQL-WEAK, and SQA) and includes human-annotated adversarial perturbations in terms of table header, table content, and question. Our results indicate that both state-of-the-art Table QA models and large language models (e.g., GPT-3) with few-shot learning falter in these adversarial sets. We propose to address this problem by using large language models to generate adversarial examples to enhance training, which significantly improves the robustness of Table QA models. Our data and code are publicly available at https://github.com/yilunzhao/RobuT.
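The column-shuffle perturbation mentioned in the abstract can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the paper's annotation pipeline; the function name and table data are hypothetical). The idea is that permuting a table's columns preserves its semantics, so a robust Table QA model should return the same answer on the perturbed table:

```python
import random

def shuffle_columns(header, rows, seed=0):
    """Column-shuffle perturbation: apply one random permutation to the
    header and to every row, so each cell stays aligned with its column
    name. Column order carries no meaning, so the table's semantics
    (and the correct QA answer) are unchanged."""
    rng = random.Random(seed)
    order = list(range(len(header)))
    rng.shuffle(order)
    new_header = [header[i] for i in order]
    new_rows = [[row[i] for i in order] for row in rows]
    return new_header, new_rows

# Hypothetical example table
header = ["Year", "Team", "Wins"]
rows = [["2020", "A", "10"], ["2021", "B", "12"]]
new_header, new_rows = shuffle_columns(header, rows, seed=1)
```

Evaluating a model on both the original and the shuffled table, and checking whether its answer changes, gives a simple robustness probe in the spirit of the benchmark.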

Citation (APA)
Zhao, Y., Zhao, C., Nan, L., Qi, Z., Zhang, W., Mi, B., … Radev, D. (2023). ROBUT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6064–6081). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.334
