WhiteningBERT: An Easy Unsupervised Sentence Embedding Approach

48 citations · 140 Mendeley readers

Abstract

Producing sentence embeddings in an unsupervised way is valuable for natural language matching and retrieval problems in practice. In this work, we conduct a thorough examination of unsupervised sentence embeddings derived from pretrained models. We study four pretrained models and run extensive experiments on seven datasets for sentence semantics. We have three main findings. First, averaging all token representations works better than using only the [CLS] vector. Second, combining the top and bottom layers works better than using the top layers alone. Lastly, a simple whitening-based vector normalization strategy, implementable in fewer than 10 lines of code, consistently boosts performance.
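Since the abstract notes that the whitening step takes fewer than 10 lines of code, a minimal sketch of the whole pipeline is shown below: average token vectors from a bottom and a top layer, then whiten the resulting sentence embeddings so they have zero mean and identity covariance. This is an illustrative reconstruction, not the authors' released code; the model choice (bert-base-uncased), the function names, and the specific layer pair (first + last) are assumptions.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; the paper evaluates several pretrained models.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Sentence embedding = token average of a bottom + top layer combination."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_hidden_states=True)
    # hidden_states[0] is the embedding layer; [1] is the first
    # transformer layer and [-1] the last (bottom + top combination).
    hid = (out.hidden_states[1] + out.hidden_states[-1]) / 2
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return ((hid * mask).sum(1) / mask.sum(1)).numpy()  # mean over real tokens

def whiten(x):
    """Whiten an (n, d) array of embeddings: zero mean, identity covariance.

    Fit the statistics on the full target corpus; with only a handful of
    sentences the covariance is rank-deficient and 1/sqrt(s) blows up.
    """
    mu = x.mean(axis=0, keepdims=True)
    cov = np.cov((x - mu).T)              # (d, d) sample covariance
    u, s, _ = np.linalg.svd(cov)          # cov = U diag(s) U^T
    w = u @ np.diag(1.0 / np.sqrt(s))     # whitening matrix W = U diag(s)^{-1/2}
    return (x - mu) @ w
```

Cosine similarity on the whitened vectors then scores sentence pairs; as noted in the docstring, the mean and whitening matrix should be estimated on the full evaluation corpus rather than a small batch.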

Cite (APA)

Huang, J., Tang, D., Zhong, W., Lu, S., Shou, L., Gong, M., … Duan, N. (2021). WhiteningBERT: An Easy Unsupervised Sentence Embedding Approach. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 238–244). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.23
