Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation

18 citations · 58 Mendeley readers

Abstract

The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations on the natural language (NL) question side, neglecting the variability of tables. Motivated by this, we propose Adversarial Table Perturbation (ATP) as a new attacking paradigm for measuring the robustness of Text-to-SQL models. Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs. All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing the models' vulnerability in real-world practice. To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data. Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations. We release our benchmark and code at: https://github.com/microsoft/ContextualSP.
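
To make the notion of a table-side perturbation concrete, here is a minimal Python sketch. It is not the paper's ATP generation procedure; the synonym map, column names, and function name are illustrative assumptions only. It renames schema columns to synonyms, the kind of surface change a robust Text-to-SQL parser should tolerate:

    # Illustrative only: a toy table-side perturbation, not the ADVETA/ATP pipeline.
    # Hypothetical synonym map; a real attack would choose replacements contextually.
    SYNONYMS = {"name": "title", "year": "release_year", "country": "nation"}

    def perturb_columns(columns):
        # Return a copy of the column list with synonym substitutions applied.
        return [SYNONYMS.get(col, col) for col in columns]

    print(perturb_columns(["id", "name", "year", "country"]))
    # ['id', 'title', 'release_year', 'nation']

A parser that answers "Which movies came out after 2000?" over the original schema should still produce a correct query over the perturbed column names.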

Cite (APA)

Pi, X., Wang, B., Gao, Y., Guo, J., Li, Z., & Lou, J. G. (2022). Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 2007–2022). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.142
