TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation

Abstract

A sentence embedding vector can be obtained by attaching a global average pooling (GAP) layer to a pre-trained language model. The problem with a sentence embedding vector produced by GAP is that every word appearing in the sentence contributes with the same weight. We propose a novel sentence-embedding model, Token Attention SentenceBERT (TA-SBERT), to address this problem. TA-SBERT enhances the quality of sentence embeddings through three strategies. First, we convert words to their base form when preprocessing the input sentence to reduce misinterpretation. Second, we propose a novel Token Attention (TA) technique that distinguishes important words to produce more informative sentence vectors. Third, we increase the stability of fine-tuning and avoid catastrophic forgetting by adding a reconstruction loss on the word embedding vectors. Extensive ablation studies demonstrate that TA-SBERT outperforms the original SentenceBERT (SBERT) in sentence vector evaluation on semantic textual similarity (STS) tasks and the SentEval toolkit.
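To illustrate the pooling distinction the abstract draws, the sketch below contrasts plain GAP (every token weighted equally) with an attention-weighted pooling of token embeddings. This is a minimal NumPy illustration, not the authors' implementation: the scoring vector `w` is a hypothetical stand-in for the learned parameters of the proposed Token Attention module, which is not reproduced here.

import numpy as np

def gap_pooling(token_embeddings: np.ndarray) -> np.ndarray:
    """Global average pooling: every token contributes with equal weight."""
    return token_embeddings.mean(axis=0)

def attention_pooling(token_embeddings: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Attention-weighted pooling: a learned score re-weights each token.

    `w` is a placeholder for learned attention parameters; in TA-SBERT the
    token scores would come from the proposed Token Attention technique.
    """
    scores = token_embeddings @ w                      # one scalar score per token
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax over tokens
    return weights @ token_embeddings                  # weighted sum of token vectors

# Toy example: 4 tokens with 6-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 6))
w = rng.normal(size=6)
print(gap_pooling(tokens))          # uniform weights (1/4 each)
print(attention_pooling(tokens, w)) # non-uniform, score-dependent weights

The only difference between the two sentence vectors is the token weighting; the abstract's claim is that making these weights informative (rather than uniform) yields better sentence representations.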

Citation (APA)
Seo, J., Lee, S., Liu, L., & Choi, W. (2022). TA-SBERT: Token Attention Sentence-BERT for Improving Sentence Representation. IEEE Access, 10, 39119–39128. https://doi.org/10.1109/ACCESS.2022.3164769
