Abstract
Recent advances in text autoencoders have significantly improved the quality of the latent space, enabling models to generate grammatical and consistent text from aggregated latent vectors. Building on this property, unsupervised opinion summarization models generate a summary by decoding the aggregated latent vectors of the inputs, typically computed as a simple average. However, little is known about how this aggregation step affects generation quality. In this study, we revisit the commonly used simple average approach by examining the latent space and the generated summaries. We find that text autoencoders tend to generate overly generic summaries from simply averaged latent vectors because of an unexpected L2-norm shrinkage in the aggregated vectors, which we refer to as summary vector degeneration. To overcome this issue, we develop COOP, a framework that searches over input combinations for latent vector aggregation using input-output word overlap. Experimental results show that COOP successfully alleviates summary vector degeneration and establishes new state-of-the-art performance on two opinion summarization benchmarks. Code is available at https://github.com/megagonlabs/coop.
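The two ideas in the abstract — L2-norm shrinkage under simple averaging, and a search over input subsets scored by input-output word overlap — can be illustrated with a toy sketch. This is not the paper's implementation: the latent vectors are random unit vectors, and `decode` is a hypothetical stand-in for the trained autoencoder's decoder.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy latent vectors for 4 input reviews, normalized to unit L2 norm.
latents = rng.normal(size=(4, 8))
latents /= np.linalg.norm(latents, axis=1, keepdims=True)

# Simple averaging shrinks the L2 norm of the aggregate ("summary vector
# degeneration"): the mean of non-parallel unit vectors has norm < 1.
avg = latents.mean(axis=0)
print(np.linalg.norm(avg))  # strictly less than 1.0

def word_overlap(summary_words, input_words):
    """Fraction of summary words that also appear in the inputs."""
    summary_words, input_words = set(summary_words), set(input_words)
    return len(summary_words & input_words) / max(len(summary_words), 1)

def coop_select(latents, decode, input_words):
    """COOP-style sketch: average every non-empty subset of input latents,
    decode each candidate, and keep the vector whose output overlaps most
    with the input words. `decode` maps a latent vector to a word list."""
    best_score, best_vec = -1.0, None
    for k in range(1, len(latents) + 1):
        for subset in itertools.combinations(range(len(latents)), k):
            vec = latents[list(subset)].mean(axis=0)
            score = word_overlap(decode(vec), input_words)
            if score > best_score:
                best_score, best_vec = score, vec
    return best_vec
```

The exhaustive subset search is feasible here because opinion summarization typically aggregates a small number of input reviews; the paper's actual scoring and decoding details differ.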
Iso, H., Wang, X., Suhara, Y., Angelidis, S., & Tan, W. C. (2021). Convex Aggregation for Opinion Summarization. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 3885–3903). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.328