Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study


Abstract

Large generative language models such as GPT-2 are well known for their ability to generate text, as well as for their utility in supervised downstream tasks via fine-tuning. The prevalence of such machine-generated text on the web, however, is still not well understood: if we run GPT-2 detectors across the web, what will we find? Our work is twofold. First, we demonstrate via human evaluation that classifiers trained to discriminate between human- and machine-generated text emerge as unsupervised predictors of "page quality", able to detect low-quality content without any training. This enables fast bootstrapping of quality indicators in low-resource settings. Second, curious to understand the prevalence and nature of low-quality pages in the wild, we conduct extensive qualitative and quantitative analysis over 500 million web articles, making this the largest-scale study ever conducted on the topic.

Citation (APA)

Bahri, D., Tay, Y., Zheng, C., Brunk, C., Metzler, D., & Tomkins, A. (2021). Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study. In WSDM 2021 - Proceedings of the 14th ACM International Conference on Web Search and Data Mining (pp. 301–309). Association for Computing Machinery, Inc. https://doi.org/10.1145/3437963.3441809
