Simple and Effective Unsupervised Redundancy Elimination to Compress Dense Vectors for Passage Retrieval


Abstract

Recent work has shown that dense passage retrieval techniques achieve better ranking accuracy in open-domain question answering compared to sparse retrieval techniques such as BM25, but at the cost of large space and memory requirements. In this paper, we analyze the redundancy present in encoded dense vectors and show that the default dimension of 768 is unnecessarily large. To improve space efficiency, we propose a simple unsupervised compression pipeline that consists of principal component analysis (PCA), product quantization, and hybrid search. We further investigate other supervised baselines and find, surprisingly, that unsupervised PCA outperforms them in some settings. We perform extensive experiments on five question answering datasets and demonstrate that our best pipeline achieves good accuracy-space trade-offs, for example, 48× compression with less than 3% drop in top-100 retrieval accuracy on average, or 96× compression with less than 4% drop. Code and data are available at http://pyserini.io/.
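The two unsupervised steps of the pipeline described in the abstract (PCA followed by product quantization) can be sketched in plain numpy. This is an illustrative toy, not the paper's implementation: the dimensions, number of subquantizers, random data, and the tiny k-means routine are all stand-in assumptions (the paper works with 768-dimensional encoder outputs and real passage collections).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for encoded passage vectors (real DPR vectors are 768-d).
n, d, d_pca = 1000, 64, 16          # illustrative sizes, not the paper's
X = rng.normal(size=(n, d)).astype(np.float32)

# --- Step 1: PCA (unsupervised) to drop redundant dimensions ---
mean = X.mean(axis=0)
Xc = X - mean
# top d_pca principal directions via SVD of the centered matrix
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:d_pca].T                    # (d, d_pca) projection matrix
Z = Xc @ P                          # reduced vectors, shape (n, d_pca)

# --- Step 2: product quantization ---
# Split each reduced vector into m subvectors and run k-means in each
# sub-space; a vector is then stored as m small centroid indices.
m, k = 4, 16                        # 4 subquantizers, 16 centroids each
sub_d = d_pca // m

def kmeans(data, k, iters=10):
    """Tiny Lloyd's k-means, just enough for a sketch."""
    cent = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dist = ((data[:, None, :] - cent[None]) ** 2).sum(-1)
        assign = dist.argmin(1)
        for j in range(k):
            pts = data[assign == j]
            if len(pts):
                cent[j] = pts.mean(0)
    return cent

codebooks, codes = [], []
for i in range(m):
    sub = Z[:, i * sub_d:(i + 1) * sub_d]
    cent = kmeans(sub, k)
    dist = ((sub[:, None, :] - cent[None]) ** 2).sum(-1)
    codebooks.append(cent)
    codes.append(dist.argmin(1).astype(np.uint8))
codes = np.stack(codes, axis=1)     # (n, m) uint8 codes

# Each vector now costs m bytes instead of d float32s.
orig_bytes = d * 4
compressed_bytes = m                # one byte per subquantizer code
print(f"compression: {orig_bytes / compressed_bytes:.0f}x")
```

With these toy sizes the sketch yields 64× compression per vector; the paper's reported 48× and 96× points come from analogous choices of PCA dimension and code size on real 768-dimensional vectors.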

Citation (APA)

Ma, X., Li, M., Sun, K., Xin, J., & Lin, J. (2021). Simple and Effective Unsupervised Redundancy Elimination to Compress Dense Vectors for Passage Retrieval. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2854–2859). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.227
