Textual similarity is a crucial aspect of many extractive text summarization methods. A bag-of-words representation cannot capture the semantic relationships between concepts when comparing strongly related sentences that share no words. To overcome this issue, in this paper we propose a centroid-based method for text summarization that exploits the compositional capabilities of word embeddings. Evaluations on multi-document and multilingual datasets demonstrate the effectiveness of the continuous vector representation of words compared to the bag-of-words model. Despite its simplicity, our method achieves good performance even in comparison to more complex deep learning models. Our method is unsupervised and can be adopted in other summarization tasks.
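Since the abstract only outlines the approach, the following is a minimal, hypothetical Python sketch of a centroid-based summarizer built on compositional word embeddings: topic words are selected (here by simple frequency, standing in for the tf-idf-based selection typically used in centroid methods), the centroid is the sum of their embeddings, and sentences are ranked by the cosine similarity between their summed embeddings and the centroid. The toy embedding table, function names, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of centroid-based summarization with word embeddings.
# The embedding lookup and topic-word selection are simplified placeholders.
import numpy as np
from collections import Counter

def embed(words, embeddings, dim):
    """Compose a text vector by summing the embeddings of its words."""
    vec = np.zeros(dim)
    for w in words:
        if w in embeddings:
            vec += embeddings[w]
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def summarize(sentences, embeddings, dim, topic_size=10, summary_size=2):
    # Pick the most frequent in-vocabulary words as topic words
    # (a crude stand-in for tf-idf-based topic-word selection).
    tokens = [s.lower().split() for s in sentences]
    counts = Counter(w for sent in tokens for w in sent if w in embeddings)
    topic_words = [w for w, _ in counts.most_common(topic_size)]

    # Centroid = composition (sum) of the topic-word embeddings.
    centroid = embed(topic_words, embeddings, dim)

    # Score each sentence by similarity of its summed embedding to the centroid.
    scored = [(cosine(embed(sent, embeddings, dim), centroid), i)
              for i, sent in enumerate(tokens)]
    best = sorted(scored, reverse=True)[:summary_size]
    chosen = sorted(i for _, i in best)  # keep original sentence order
    return [sentences[i] for i in chosen]

if __name__ == "__main__":
    dim = 50
    rng = np.random.default_rng(0)
    vocab = "cats dogs pets animals stocks markets prices love play".split()
    toy_embeddings = {w: rng.normal(size=dim) for w in vocab}  # placeholder vectors
    docs = ["cats and dogs are pets",
            "pets are animals people love",
            "stocks and markets move prices",
            "dogs love to play"]
    print(summarize(docs, toy_embeddings, dim))
```

In practice the placeholder vectors would be replaced by pre-trained embeddings (e.g. word2vec), which is what lets semantically related sentences score highly even when they share no surface words.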
Rossiello, G., Basile, P., & Semeraro, G. (2017). Centroid-based Text Summarization through Compositionality of Word Embeddings. In MultiLing 2017 - Workshop on Summarization and Summary Evaluation Across Source Types and Genres, Proceedings of the Workshop (pp. 12–21). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-1003