Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning

Citations: 8
Readers: 42 (Mendeley users who have this article in their library)

Abstract

When pre-trained contextualized embedding-based models developed for unstructured data are adapted for structured tabular data, they perform admirably. However, recent probing studies show that these models use spurious correlations, and often predict inference labels by focusing on false evidence or ignoring it altogether. To study this issue, we introduce the task of Trustworthy Tabular Reasoning, where a model needs to extract evidence to be used for reasoning, in addition to predicting the label. As a case study, we propose a two-stage sequential prediction approach, which includes an evidence extraction and an inference stage. First, we crowdsource evidence row labels and develop several unsupervised and supervised evidence extraction strategies for INFOTABS, a tabular NLI benchmark. Our evidence extraction strategy outperforms earlier baselines. On the downstream tabular inference task, using only the automatically extracted evidence as the premise, our approach outperforms prior benchmarks.
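The abstract describes the two-stage "extract, then infer" pipeline only at a high level. The sketch below is an illustrative reconstruction of that idea, not the authors' code: the lexical-overlap row scorer, the function names (score_rows, extract_evidence, predict_label), and the toy table are assumptions made for demonstration, standing in for the paper's unsupervised/supervised extractors and fine-tuned NLI classifier on INFOTABS.

# Minimal sketch of a two-stage "extract evidence, then infer" pipeline for tabular NLI.
# Assumptions: a key-value table per INFOTABS-style examples, a simple lexical-overlap
# scorer as the evidence extractor, and a placeholder where an NLI classifier would go.

from typing import Dict, List, Tuple

def score_rows(table: Dict[str, str], hypothesis: str) -> List[Tuple[str, float]]:
    """Score each table row by token overlap with the hypothesis (unsupervised stand-in;
    a supervised extractor could replace this scoring function)."""
    hyp_tokens = set(hypothesis.lower().split())
    scores = []
    for key, value in table.items():
        row_tokens = set(f"{key} {value}".lower().split())
        overlap = len(row_tokens & hyp_tokens) / max(len(row_tokens), 1)
        scores.append((key, overlap))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

def extract_evidence(table: Dict[str, str], hypothesis: str, k: int = 3) -> Dict[str, str]:
    """Stage 1: keep only the top-k scoring rows as the evidence premise."""
    top_keys = [key for key, _ in score_rows(table, hypothesis)[:k]]
    return {key: table[key] for key in top_keys}

def predict_label(evidence: Dict[str, str], hypothesis: str) -> str:
    """Stage 2: classify entailment/contradiction/neutral from the evidence-only premise.
    Placeholder here; in practice a fine-tuned transformer NLI model would be called."""
    premise = ". ".join(f"{k} is {v}" for k, v in evidence.items())
    raise NotImplementedError("plug in an NLI classifier over (premise, hypothesis)")

# Usage example: only the most relevant row is passed on as the premise.
table = {"Born": "14 March 1879", "Died": "18 April 1955", "Fields": "Physics"}
evidence = extract_evidence(table, "This person worked in physics", k=1)
print(evidence)  # {'Fields': 'Physics'}

The design point mirrored here is that the downstream inference model never sees the full table, only the automatically extracted rows, which is what the paper evaluates as the premise for the tabular inference stage.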

Cite (APA)

Gupta, V., Zhang, S., Vempala, A., He, Y., Choji, T., & Srikumar, V. (2022). Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3268–3283). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.231
