Domain Aligned Prefix Averaging for Domain Generalization in Abstractive Summarization

Abstract

Domain generalization remains an underexplored area in abstractive summarization. Moreover, most existing work on domain generalization relies on sophisticated training algorithms. In this paper, we propose Domain Aligned Prefix Averaging (DAPA), a lightweight, weight-averaging-based approach to domain generalization for abstractive summarization. Given a set of source domains, our method first trains a prefix for each of them. These source prefixes are then used to generate summaries for a small number of target-domain documents. The similarity of each generated summary to its corresponding document is used to compute the weights for averaging the source prefixes. In DAPA, prefix tuning allows for lightweight fine-tuning, and weight averaging allows for the computationally efficient addition of new source domains. When evaluated on four diverse summarization domains, DAPA performs comparably to or better than the baselines, demonstrating the effectiveness of its prefix-averaging scheme.
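
The procedure described above lends itself to a short sketch. Below is a minimal, illustrative PyTorch version, assuming each prefix is a dict of parameter tensors, placeholder generate_summary and similarity callables (ROUGE is one plausible similarity measure), and softmax weight normalization; these names and design choices are assumptions made for illustration, not details taken from the paper.

import torch

def dapa_average_prefixes(source_prefixes, target_docs,
                          generate_summary, similarity):
    # source_prefixes: one trained prefix per source domain, each a dict
    #                  mapping parameter names to torch.Tensor values.
    # target_docs:     a small list of target-domain documents (strings).
    # generate_summary(prefix, doc) and similarity(summary, doc) are
    # placeholder callables standing in for the paper's summarizer and
    # similarity measure (e.g., a ROUGE score).

    # Score each source prefix by how well its generated summaries
    # match the target-domain documents.
    scores = []
    for prefix in source_prefixes:
        avg = sum(similarity(generate_summary(prefix, doc), doc)
                  for doc in target_docs) / len(target_docs)
        scores.append(avg)

    # Turn scores into averaging weights; softmax normalization is an
    # assumption here, and the paper may normalize differently.
    weights = torch.softmax(torch.tensor(scores), dim=0)

    # Parameter-wise weighted average of the source prefixes.
    return {name: sum(w * p[name] for w, p in zip(weights, source_prefixes))
            for name in source_prefixes[0]}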

Citation (APA)

Nair, P. A., Pal, S., & Verma, P. (2023). Domain Aligned Prefix Averaging for Domain Generalization in Abstractive Summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 4696–4710). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.288
