Text Embeddings Reveal (Almost) As Much As Text

Abstract

How much private information do text embeddings reveal about the original text? We investigate the problem of embedding inversion: reconstructing the full text represented in dense text embeddings. We frame the problem as controlled generation: generating text that, when re-embedded, is close to a fixed point in latent space. We find that although a naïve model conditioned on the embedding performs poorly, a multi-step method that iteratively corrects and re-embeds text is able to recover 92% of 32-token text inputs exactly. We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes.
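The multi-step method described above is an iterative refinement loop: given a fixed target embedding, the model repeatedly re-embeds its current text hypothesis and proposes a correction conditioned on both the target and the hypothesis embedding. The sketch below illustrates this loop in Python under stated assumptions; `embed` and `correct` are hypothetical placeholders (in the paper these would be a frozen encoder such as GTR and a trained conditional generation model), not the authors' actual API.

```python
import numpy as np

# Hypothetical stand-ins for the paper's components. In practice,
# embed() would call a frozen text encoder and correct() would run a
# trained seq2seq correction model; these names are illustrative only.
def embed(text: str) -> np.ndarray:
    """Re-embed a hypothesis text (placeholder: deterministic random unit vector)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(768)
    return v / np.linalg.norm(v)

def correct(target_emb: np.ndarray, hyp_text: str, hyp_emb: np.ndarray) -> str:
    """One correction step: condition on the target embedding, the current
    hypothesis text, and its embedding; emit a revised hypothesis."""
    return hyp_text + " (revised)"  # placeholder for the trained model

def invert_embedding(target_emb: np.ndarray, init_text: str, steps: int = 10) -> str:
    """Iteratively correct and re-embed text so that its embedding
    approaches the fixed target point in latent space."""
    hyp = init_text
    for _ in range(steps):
        hyp_emb = embed(hyp)
        # Stop once the re-embedded hypothesis is close to the target.
        if float(np.dot(hyp_emb, target_emb)) > 0.99:
            break
        hyp = correct(target_emb, hyp, hyp_emb)
    return hyp
```

With real models substituted in, the loop would terminate either at a similarity threshold like the one above or after a fixed number of correction steps; the threshold value and step count here are illustrative choices, not figures from the paper.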

Citation (APA)

Morris, J. X., Kuleshov, V., Shmatikov, V., & Rush, A. M. (2023). Text Embeddings Reveal (Almost) As Much As Text. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 12448–12460). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.765
