Text Transformations in Contrastive Self-Supervised Learning: A Review


Abstract

Contrastive self-supervised learning has become a prominent technique in representation learning. The main step in these methods is to contrast semantically similar and dissimilar pairs of samples. However, in the domain of Natural Language Processing (NLP), designing augmentation methods that create similar pairs while satisfying the assumptions of contrastive learning (CL) is challenging. This is because even a simple modification of a word in the input can change the semantic meaning of the sentence and thus violate the distributional hypothesis. In this review paper, we formalize the contrastive learning framework, emphasize the considerations that need to be addressed in the data transformation step, and review the state-of-the-art methods and evaluations for contrastive representation learning in NLP. Finally, we describe some challenges and potential directions for learning better text representations using contrastive methods.
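For concreteness, below is a minimal sketch (not taken from the paper) of the NT-Xent / InfoNCE-style objective that underlies most of the contrastive methods such a review covers. It assumes PyTorch; the function name nt_xent_loss and the temperature value are illustrative choices, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent (InfoNCE-style) contrastive loss over a batch of positive pairs.

    z_i, z_j: [N, d] embeddings of two augmented "views" of the same N inputs;
    row k of z_i and row k of z_j form a positive pair, while all other rows
    in the batch serve as negatives.
    """
    n = z_i.size(0)
    # L2-normalize and stack both views: [2N, d].
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)
    # Pairwise cosine similarities, scaled by temperature: [2N, 2N].
    sim = z @ z.t() / temperature
    # Mask self-similarity so no sample is its own positive.
    sim.fill_diagonal_(float('-inf'))
    # The positive for index k is k+N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```

In the NLP setting the review discusses, z_i and z_j would be encoder embeddings of a sentence and its transformed version; the loss treats them as a positive pair, which is exactly why the text transformation must preserve the sentence's meaning for the objective to be well-founded.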

Cite (APA)

Bhattacharjee, A., Karami, M., & Liu, H. (2022). Text Transformations in Contrastive Self-Supervised Learning: A Review. In IJCAI International Joint Conference on Artificial Intelligence (pp. 5394–5401). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/757
