Hierarchical Recurrent Aggregative Generation for Few-Shot NLG

Abstract

Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, benefit from transfer learning to different extents. To exploit this varying potential, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach is a jointly trained architecture with three modules: the first independently lexicalises each distinct unit of information in the input as a sentence sub-unit (e.g. a phrase), the second recurrently aggregates these sub-units into a unified intermediate output, and the third post-edits that output into a coherent and fluent final text. We perform extensive empirical analysis and ablation studies in few-shot and zero-shot settings across 4 datasets. Automatic and human evaluation shows that the proposed hierarchical approach consistently achieves state-of-the-art results compared to previous work.
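To make the division of labour concrete, below is a minimal structural sketch of the three-stage pipeline in plain Python. Every function here is a hypothetical stub (the paper's modules are jointly trained neural networks, not string manipulation), and the attribute=value input format is only an illustrative assumption; the sketch shows the data flow described in the abstract, per-unit lexicalisation, then recurrent aggregation, then post-editing, not the actual model.

```python
# Hypothetical sketch of the hierarchical pipeline's data flow.
# All three functions stand in for jointly trained neural modules.

from typing import List

def lexicalise(unit: str) -> str:
    """Module 1 (stub): map one unit of information, here an
    attribute=value pair, to a sentence sub-unit such as a phrase."""
    attribute, value = unit.split("=", 1)
    return f"its {attribute} is {value}"

def aggregate(sub_units: List[str]) -> str:
    """Module 2 (stub): recurrently fold the sub-units into a single
    intermediate output, consuming one sub-unit per step."""
    text = sub_units[0]
    for sub_unit in sub_units[1:]:
        text = f"{text}, and {sub_unit}"  # one recurrent aggregation step
    return text

def post_edit(draft: str) -> str:
    """Module 3 (stub): revise the intermediate output into a coherent,
    fluent final text."""
    return draft[0].upper() + draft[1:] + "."

def generate(units: List[str]) -> str:
    # Each input unit is lexicalised independently before aggregation.
    sub_units = [lexicalise(u) for u in units]
    return post_edit(aggregate(sub_units))

print(generate(["name=Aromi", "eatType=coffee shop", "area=city centre"]))
# Its name is Aromi, and its eatType is coffee shop, and its area is city centre.
```

The point of the decomposition is that the lexicalisation stub operates on one unit at a time while the aggregation stub is the only place where units interact, which is what lets the two sub-tasks benefit from transfer learning to different extents.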

Citation (APA)
Zhou, G., Lampouras, G., & Iacobacci, I. (2022). Hierarchical Recurrent Aggregative Generation for Few-Shot NLG. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2167–2181). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.170
