We use paraphrases as a unique source of data for analyzing contextualized embeddings, with a particular focus on BERT. Because paraphrases naturally encode consistent word and phrase semantics, they provide a useful lens for investigating properties of embeddings. Using the Paraphrase Database's alignments, we study both words within paraphrases and full phrase representations. We find that contextual embeddings handle polysemous words effectively, but in many cases give synonyms surprisingly different representations. We confirm previous findings that BERT is sensitive to word order, but observe slightly different patterns than prior work in the level of contextualization across BERT's layers.
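A minimal sketch (not the authors' code) of the kind of comparison the abstract describes: extracting BERT embeddings for a word aligned across two paraphrases and measuring their similarity. The sentence pair, the aligned word choice ("movie"/"film"), and the use of last-layer mean-pooled subword vectors are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean of the last-layer vectors for the word's subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the word's subword span in the encoded sentence.
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i : i + len(word_ids)] == word_ids:
            return hidden[i : i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

# Hypothetical paraphrase pair with the aligned words "movie" and "film".
a = word_embedding("I watched a great movie last night.", "movie")
b = word_embedding("I saw a great film yesterday evening.", "film")
cos = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"cosine similarity: {cos.item():.3f}")
```

Repeating such comparisons over many aligned pairs is one way to probe, for instance, whether synonyms in matching contexts receive similar representations.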
Citation
Burdick, L., Kummerfeld, J. K., & Mihalcea, R. (2022). Using Paraphrases to Study Properties of Contextual Embeddings. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2022) (pp. 4558–4568). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.naacl-main.338