Efficient Document Embeddings via Self-Contrastive Bregman Divergence Learning

Abstract

Learning high-quality document embeddings is a fundamental problem in natural language processing (NLP), information retrieval (IR), recommendation systems, and search engines. Despite recent advances in transformer-based models that produce sentence embeddings via self-contrastive learning, encoding long documents (thousands of words) remains challenging with respect to both efficiency and quality. We therefore train Longformer-based document encoders using a state-of-the-art unsupervised contrastive learning method (SimCSE). We then complement the baseline method, a Siamese neural network, with additional convex neural networks based on functional Bregman divergence, aiming to enhance the quality of the output document representations. We show that, overall, the combination of a self-contrastive Siamese network and our proposed neural Bregman network outperforms the baselines in two linear classification settings on three long-document topic classification tasks from the legal and biomedical domains.
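To make the two ingredients concrete, the sketch below (PyTorch) pairs an unsupervised SimCSE-style contrastive loss with a functional Bregman divergence D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>, where phi is a small learnable convex network. This is a minimal illustration under stated assumptions, not the authors' released implementation: the names `ConvexNet`, `bregman_divergence`, and `simcse_loss`, the temperature, and the 0.1 loss weight are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexNet(nn.Module):
    """phi(z), convex in z: softplus of an affine map is convex, and a
    non-negative weighted sum of convex functions remains convex."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.fc = nn.Linear(dim, hidden)
        self.out = nn.Parameter(torch.rand(1, hidden))

    def forward(self, z):
        # Clamping the output weights to be non-negative preserves convexity.
        return F.linear(F.softplus(self.fc(z)), self.out.clamp(min=0)).squeeze(-1)

def bregman_divergence(phi, p, q):
    """Functional Bregman divergence: phi(p) - phi(q) - <grad phi(q), p - q>."""
    q = q.detach().requires_grad_(True)
    phi_q = phi(q)
    grad_q = torch.autograd.grad(phi_q.sum(), q, create_graph=True)[0]
    return phi(p) - phi_q - ((p - q) * grad_q).sum(-1)

def simcse_loss(z1, z2, temperature=0.05):
    """Unsupervised SimCSE: two dropout views of the same documents are
    positives; every other in-batch pair serves as a negative."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / temperature  # (B, B) matrix of cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)

# z1 and z2 stand in for two stochastic (dropout) forward passes of the same
# Longformer-based encoder over one batch of documents.
phi = ConvexNet(dim=768)
z1, z2 = torch.randn(8, 768), torch.randn(8, 768)
loss = simcse_loss(z1, z2) + 0.1 * bregman_divergence(phi, z1, z2).mean()
loss.backward()
```

In this reading, the contrastive term pulls the two dropout views of a document together against in-batch negatives, while the Bregman term adds a learned, asymmetric notion of distance between the paired representations.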

Citation (APA)

Saggau, D., Rezaei, M., Bischl, B., & Chalkidis, I. (2023). Efficient Document Embeddings via Self-Contrastive Bregman Divergence Learning. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 12181–12190). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.771
