Table-GPT: Table Fine-tuned GPT for Diverse Table Tasks

Abstract

Language models such as GPT-3 and ChatGPT demonstrate remarkable abilities to follow diverse human instructions and perform a wide range of tasks, thanks to instruction fine-tuning. However, when we probe language models with a range of basic table-understanding tasks, we observe that today’s language models are still sub-optimal on many table-related tasks, likely because they are pre-trained predominantly on one-dimensional natural-language text, whereas relational tables are two-dimensional objects. In this work, we propose a new “table fine-tuning” paradigm, where we continue to train/fine-tune language models like GPT-3.5 and ChatGPT, using diverse table tasks synthesized from real tables as training data. This is analogous to “instruction fine-tuning”, but with the goal of enhancing language models’ ability to understand tables and perform table tasks. We show that the resulting Table-GPT models demonstrate: (1) better table-understanding capabilities, consistently outperforming the vanilla untuned GPT-3.5 and ChatGPT on a wide range of table tasks (data transformation, data cleaning, data imputation, table-QA, etc.), including tasks that are completely held out and unseen during training, and (2) strong generalizability, in Table-GPT’s ability to respond to diverse human instructions and perform new and unseen table tasks, in a manner similar to GPT-3.5 and ChatGPT.
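To make the “table fine-tuning” recipe concrete, the sketch below shows one hypothetical way a training example could be synthesized from a real table, here for a missing-value imputation task: one cell is masked, the table is serialized into the prompt, and the original cell value becomes the expected completion. The task choice, prompt wording, and markdown serialization are illustrative assumptions, not the authors’ exact pipeline.

```python
# Hypothetical sketch of synthesizing one (instruction, table, completion) training
# example from a real table, in the spirit of the "table fine-tuning" paradigm.
import random


def serialize_table(rows, header):
    """Serialize a small relational table as markdown-style text for the prompt."""
    lines = ["|" + "|".join(header) + "|",
             "|" + "|".join("---" for _ in header) + "|"]
    for row in rows:
        lines.append("|" + "|".join(str(v) for v in row) + "|")
    return "\n".join(lines)


def synthesize_imputation_example(rows, header, seed=0):
    """Mask one random cell and build a training triple (instruction, input, completion)."""
    rng = random.Random(seed)
    r = rng.randrange(len(rows))
    c = rng.randrange(len(header))
    answer = rows[r][c]                      # ground-truth completion
    masked = [list(row) for row in rows]     # copy so the source table is untouched
    masked[r][c] = "[MISSING]"
    instruction = (f"The cell in row {r + 1}, column '{header[c]}' is missing. "
                   "Based on the rest of the table, predict the missing value.")
    return {"instruction": instruction,
            "input": serialize_table(masked, header),
            "completion": str(answer)}


if __name__ == "__main__":
    header = ["country", "capital", "continent"]
    rows = [["France", "Paris", "Europe"],
            ["Japan", "Tokyo", "Asia"],
            ["Brazil", "Brasilia", "South America"]]
    example = synthesize_imputation_example(rows, header)
    print(example["instruction"])
    print(example["input"])
    print("Expected completion:", example["completion"])
```

Repeating this kind of synthesis over many real tables and many task types (imputation, transformation, cleaning, table-QA, and so on) yields the diverse table-task training data the abstract refers to.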

Cite

APA

Li, P., He, Y., Yashar, D., Cui, W., Ge, S., Zhang, H., … Chaudhuri, S. (2024). Table-GPT: Table Fine-tuned GPT for Diverse Table Tasks. Proceedings of the ACM on Management of Data, 2(3). https://doi.org/10.1145/3654979
