Much work has been done on image compression via machine learning, but comparatively little attention has been paid to the compression of natural language. Compressing text into lossless representations while keeping its features easily retrievable is not a trivial task, yet it offers substantial benefits. Most methods designed to produce feature-rich sentence embeddings focus solely on performing well on downstream tasks and cannot properly reconstruct the original sequence from the learned embedding. In this work, we propose a near-lossless method for encoding long sequences of text, as well as all of their sub-sequences, into feature-rich representations. We evaluate our method on sentiment analysis and show good performance across all sub-sentence and sentence embeddings.
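To make the abstract's core idea concrete, here is a minimal sketch (not the authors' implementation) of the general setup it describes: encode a token sequence into a fixed-size embedding and train the model to reconstruct the original sequence from that embedding, so the representation stays usable for downstream tasks while remaining (near-)invertible. The GRU-based architecture, dimensions, and names below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sequence autoencoder: fixed-size embedding + reconstruction loss.
# This is a generic sketch of the encode-then-reconstruct idea, not the method
# proposed by Prato et al. (2019).
import torch
import torch.nn as nn


class SequenceAutoencoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) -> fixed-size sequence embedding (batch, hidden_dim)
        _, h = self.encoder(self.embed(tokens))
        return h.squeeze(0)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Reconstruct the sequence from its embedding (teacher forcing on the input).
        h = self.encode(tokens).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tokens), h)
        return self.out(dec_out)  # (batch, seq_len, vocab_size) logits


if __name__ == "__main__":
    model = SequenceAutoencoder(vocab_size=1000)
    tokens = torch.randint(0, 1000, (4, 12))  # toy batch of token ids
    logits = model(tokens)
    # Reconstruction objective: the lower this loss, the closer to lossless the encoding.
    loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tokens.reshape(-1))
    print(loss.item())
```

The same trained encoder can be applied to sub-sequences of a sentence to obtain sub-sentence embeddings for a downstream task such as sentiment analysis; how the paper itself produces and combines sub-sequence representations is not specified in this abstract.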
Citation
Prato, G., Duchesneau, M., Chandar, S., & Tapp, A. (2020). Towards lossless encoding of sentences. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1577–1583). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1153