How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation?

Abstract

Existing approaches to the Table-to-Text task suffer from issues such as missing information, hallucination, and repetition. Many approaches to this problem use Reinforcement Learning (RL), which maximizes a single, manually defined reward such as BLEU. In this work, we instead pose the Table-to-Text task as an Inverse Reinforcement Learning (IRL) problem. We explore using multiple interpretable, unsupervised reward components that are combined linearly to form a composite reward function. The composite reward function and the description generator are learned jointly. We find that IRL only marginally outperforms strong RL baselines. We further study how well the learned IRL rewards generalize in domain-adaptation scenarios. Our experiments reveal significant challenges in using IRL for this task.
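
The abstract's central construction, a composite reward formed as a linear combination of interpretable reward components with learnable combination weights, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the component functions (a coverage score and a repetition penalty) and all names are hypothetical stand-ins for the paper's unsupervised reward components.

```python
import torch

# Hypothetical unsupervised reward components (illustrative only; the
# paper's actual components are not specified in this abstract).

def coverage_reward(table_fields, text_tokens):
    """Fraction of table fields mentioned in the generated text
    (a proxy for penalizing missing information)."""
    mentioned = sum(any(f in tok for tok in text_tokens) for f in table_fields)
    return mentioned / max(len(table_fields), 1)

def repetition_penalty(text_tokens):
    """Negative reward proportional to repeated tokens
    (a proxy for penalizing repetition)."""
    return -(len(text_tokens) - len(set(text_tokens))) / max(len(text_tokens), 1)

class CompositeReward(torch.nn.Module):
    """Composite reward r(x, y) = w . phi(x, y): a linear combination of
    component scores phi with learnable weights w, as the abstract describes."""
    def __init__(self, n_components):
        super().__init__()
        self.weights = torch.nn.Parameter(torch.ones(n_components) / n_components)

    def forward(self, component_scores):
        # component_scores: tensor of shape (n_components,)
        return torch.dot(self.weights, component_scores)

# Usage: score one (table, generated text) pair.
table_fields = ["name", "birth_date", "occupation"]
tokens = "John Smith was born on 1 May 1970 .".split()
phi = torch.tensor([coverage_reward(table_fields, tokens),
                    repetition_penalty(tokens)])
reward_fn = CompositeReward(n_components=2)
print(float(reward_fn(phi)))  # composite reward for this pair
```

In the IRL setting, the weights would be updated so that reference descriptions score higher than sampled generations, while the generator is trained against the current reward; since the abstract does not specify the exact joint objective, no training loop is shown here.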

Citation (APA)

Ghosh, S., Qi, Z., Chaturvedi, S., & Srivastava, S. (2021). How helpful is inverse reinforcement learning for table-to-text generation? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers) (pp. 71–79). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-short.11
