Duplicate documents are a pervasive problem in text datasets and can have a strong effect on unsupervised models. Methods to remove duplicate texts are typically heuristic or very expensive, so it is vital to know when and why they are needed. We measure the sensitivity of two latent semantic methods to the presence of different levels of document repetition. By artificially creating different forms of duplicate text we confirm several hypotheses about how repeated text impacts models. While a small amount of duplication is tolerable, substantial over-representation of subsets of the text may overwhelm meaningful topical patterns.
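For illustration only, the sketch below shows the general kind of duplication experiment the abstract describes: over-represent a subset of documents and compare the topics a model learns before and after. This is not the paper's actual setup; the toy corpus, duplication rate, and the use of gensim's LDA are all assumptions made for the example.

```python
"""Illustrative sketch (not the paper's method): duplicate a subset of
documents and compare LDA topics on the original vs. duplicated corpus."""
import random
from gensim import corpora, models

def duplicate_subset(docs, fraction=0.1, copies=10, seed=0):
    """Return docs plus `copies` extra copies of a random `fraction` of them."""
    rng = random.Random(seed)
    chosen = rng.sample(range(len(docs)), max(1, int(fraction * len(docs))))
    return docs + [docs[i] for i in chosen for _ in range(copies)]

# Toy tokenized corpus; a real experiment would use a large text collection.
docs = [["topic", "model", "text"], ["duplicate", "document", "text"],
        ["semantic", "analysis", "model"], ["corpus", "duplicate", "text"]]

for label, corpus_docs in [("original", docs),
                           ("with duplicates", duplicate_subset(docs))]:
    dictionary = corpora.Dictionary(corpus_docs)
    bow = [dictionary.doc2bow(d) for d in corpus_docs]
    lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                          random_state=0, passes=5)
    print(label, lda.show_topics(num_words=3))
```

Comparing the printed topic-word lists across the two runs gives a rough sense of how heavily duplicated documents can dominate the learned topics.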
Citation
Schofield, A., Thompson, L., & Mimno, D. (2017). Quantifying the effects of text duplication on semantic models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2737–2747). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1290