Abstract
Summarization systems make numerous implicit “decisions” about summary properties during inference, e.g. the degree of copying, and the specificity and length of outputs. However, these decisions are encoded within model parameters, and specific styles cannot be enforced. To address this, we introduce HYDRASUM, a new summarization architecture that extends the single-decoder framework of current models to a mixture-of-experts version with multiple decoders. We show that HYDRASUM's multiple decoders automatically learn contrasting summary styles when trained under the standard training objective, without any extra supervision. Through experiments on three summarization datasets (CNN, NEWSROOM and XSUM), we show that HYDRASUM provides a simple mechanism to obtain stylistically diverse summaries by sampling from either individual decoders or their mixtures, outperforming baseline models. Finally, we demonstrate that a small modification to the gating strategy during training can enforce an even stricter style partitioning, e.g. high- vs. low-abstractiveness or high- vs. low-specificity, allowing users to sample from a larger area in the generation space and vary summary styles along multiple dimensions.
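The core mixture idea from the abstract can be sketched as follows: each decoder produces its own next-token distribution, and a gate interpolates between them, so setting the gate to one extreme samples purely from a single decoder while intermediate values blend styles. This is a minimal illustrative sketch, not the paper's implementation; the function name, toy vocabulary, and distributions are assumptions for demonstration.

```python
import numpy as np

def mixture_next_token_probs(decoder_probs, gate_weights):
    """Blend per-decoder next-token distributions with gate weights.

    decoder_probs: shape (num_decoders, vocab_size), each row a distribution
    gate_weights:  shape (num_decoders,), non-negative, summing to 1
    """
    probs = np.asarray(decoder_probs, dtype=float)
    g = np.asarray(gate_weights, dtype=float)
    mixed = g @ probs          # weighted sum over decoders
    return mixed / mixed.sum() # renormalize for numerical safety

# Two toy decoders over a 4-token vocabulary (hypothetical values):
p_copy = [0.7, 0.1, 0.1, 0.1]        # a decoder that favors copying
p_abstract = [0.1, 0.4, 0.3, 0.2]    # a decoder that favors paraphrasing

# Gate [1, 0] samples purely from the first decoder;
# [0.5, 0.5] interpolates between the two styles.
print(mixture_next_token_probs([p_copy, p_abstract], [0.5, 0.5]))
```

Varying the gate at inference time is what lets a user move through the style space spanned by the decoders without retraining.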
Citation
Goyal, T., Rajani, N., Liu, W., & Kryściński, W. (2022). HYDRASUM: Disentangling Style Features in Text Summarization with Multi-Decoder Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 464–479). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.30