Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost

Abstract

State-of-the-art NLP systems represent inputs with word embeddings, but these are brittle when faced with Out-of-Vocabulary (OOV) words. To address this issue, we follow the principle of mimick-like models and generate vectors for unseen words by learning the behavior of pre-trained embeddings from the surface form of words alone. We present a simple contrastive learning framework, LOVE, which extends the word representation of an existing pre-trained language model (such as BERT) and makes it robust to OOV words with few additional parameters. Extensive evaluations demonstrate that our lightweight model achieves similar or even better performance than prior competitors, both on original datasets and on corrupted variants. Moreover, it can be used in a plug-and-play fashion with FastText and BERT, where it significantly improves their robustness.
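
To make the mimick-like contrastive idea concrete, below is a minimal, hypothetical sketch in PyTorch rather than the authors' released code: a surface-form encoder built from character embeddings is trained so that, for each word in a batch, its generated vector matches the pre-trained target embedding of that word and not the embeddings of the other words. The class and function names are illustrative assumptions.

# Minimal sketch of a mimick-like contrastive objective (illustrative only).
# Assumes PyTorch; the character encoder and the InfoNCE-style loss are
# simplified stand-ins for LOVE's actual components.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharEncoder(nn.Module):
    """Hypothetical surface-form encoder: character embeddings + mean pooling."""
    def __init__(self, n_chars=128, char_dim=64, out_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.proj = nn.Linear(char_dim, out_dim)

    def forward(self, char_ids):                                # (batch, max_word_len)
        mask = (char_ids != 0).float().unsqueeze(-1)            # ignore padding
        emb = self.char_emb(char_ids) * mask
        pooled = emb.sum(1) / mask.sum(1).clamp(min=1.0)        # mean over characters
        return self.proj(pooled)                                # (batch, out_dim)

def contrastive_loss(pred, target, temperature=0.07):
    """InfoNCE-style loss: each generated vector should match the pre-trained
    embedding of its own word and differ from the other words in the batch."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = pred @ target.t() / temperature                    # (batch, batch) similarities
    labels = torch.arange(pred.size(0), device=pred.device)     # diagonal = positives
    return F.cross_entropy(logits, labels)

# Usage sketch: char_ids encodes the surface forms of a batch of words and
# teacher_vecs holds their pre-trained (e.g. FastText or BERT input) embeddings.
encoder = CharEncoder()
char_ids = torch.randint(1, 128, (8, 12))                       # dummy character IDs
teacher_vecs = torch.randn(8, 300)                              # dummy target embeddings
loss = contrastive_loss(encoder(char_ids), teacher_vecs)
loss.backward()

The actual LOVE architecture and its positive-pair construction differ in detail and are described in the paper; the sketch only shows the general mimicking-by-contrast setup.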

Cite (APA)

Chen, L., Varoquaux, G., & Suchanek, F. M. (2022). Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3488–3504). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.245
