An Inner Table Retriever for Robust Table Question Answering

Abstract

Recent years have witnessed the thriving of pretrained Transformer-based language models for understanding semi-structured tables, with several applications such as Table Question Answering (TableQA). These models are typically trained jointly on tables and surrounding natural language text, by linearizing table content into sequences comprising special tokens and cell information. This yields very long sequences that make systems inefficient; moreover, simply truncating long sequences results in information loss for downstream tasks. We propose Inner Table Retriever (ITR), a general-purpose approach for handling long tables in TableQA that extracts sub-tables to preserve the information most relevant to a question. We show that ITR can be easily integrated into existing systems to improve their accuracy by up to 1.3–4.8% and achieve state-of-the-art results on two benchmarks: 63.4% on WikiTableQuestions and 92.1% on WikiSQL. Additionally, we show that ITR makes TableQA systems more robust to reduced model capacity and to different orderings of columns and rows.
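The sub-table extraction idea can be illustrated with a minimal sketch. Note that the function names and the token-overlap scorer below are illustrative assumptions for exposition only; the paper's actual ITR uses a learned retriever, not lexical overlap.

```python
import re

def overlap_score(text, question):
    """Fraction of question tokens that also appear in `text`.
    A stand-in relevance scorer; ITR itself learns this scoring."""
    q_tokens = set(re.findall(r"\w+", question.lower()))
    t_tokens = set(re.findall(r"\w+", text.lower()))
    return len(q_tokens & t_tokens) / max(len(q_tokens), 1)

def retrieve_subtable(header, rows, question, max_rows=2):
    """Keep the rows most relevant to the question (original order
    preserved), so the linearized sub-table fits the model's input."""
    ranked = sorted(range(len(rows)),
                    key=lambda i: overlap_score(" ".join(rows[i]), question),
                    reverse=True)
    keep = sorted(ranked[:max_rows])
    return header, [rows[i] for i in keep]

# Toy table: a full linearization might exceed the model's length limit.
header = ["City", "Country", "Population"]
rows = [
    ["Paris", "France", "2.1M"],
    ["Lima", "Peru", "9.7M"],
    ["Oslo", "Norway", "0.7M"],
]
_, sub = retrieve_subtable(header, rows,
                           "What is the population of Oslo?", max_rows=1)
# `sub` retains only the Oslo row, the one relevant to the question.
```

Only the retained rows are then linearized and passed to the TableQA model, avoiding both truncation loss and needlessly long input sequences.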

Citation (APA)

Lin, W., Blloshmi, R., Byrne, B., de Gispert, A., & Iglesias, G. (2023). An Inner Table Retriever for Robust Table Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 9909–9926). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.551
