False information spread via the internet and social media influences public opinion and user activity, and generative models now enable fake content to be produced faster and more cheaply than was previously possible. In the near future, identifying fake content generated by deep learning models will play a key role in protecting users from misinformation. To this end, a dataset containing human- and computer-generated headlines was created, and a user study indicated that humans were able to identify the fake headlines in only 47.8% of cases. In contrast, the most accurate automatic approach, transformers, achieved an overall accuracy of 85.7%, indicating that content generated by language models can be filtered out accurately.
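The filtering task described above is binary text classification: given a headline, predict whether it is human-written or machine-generated. The paper uses transformers for this; as a minimal, self-contained sketch of the same task, the toy classifier below uses a Naive Bayes bag-of-words model instead (the headlines, labels, and function names here are invented for illustration and are not the paper's dataset or method).

```python
import math
from collections import Counter

# Invented toy data standing in for a human/generated headline dataset.
DOCS_BY_LABEL = {
    "human": [
        "senate passes new budget bill",
        "local team wins championship game",
    ],
    "generated": [
        "moon cheese discovered by dancing robots",
        "robots dancing on moon again",
    ],
}

def train(docs_by_label):
    """Count word frequencies per label and collect the shared vocabulary."""
    counts = {
        label: Counter(word for doc in docs for word in doc.split())
        for label, docs in docs_by_label.items()
    }
    vocab = {word for counter in counts.values() for word in counter}
    return counts, vocab

def classify(headline, counts, vocab):
    """Pick the label with the highest smoothed log-likelihood (uniform prior)."""
    best_label, best_logprob = None, float("-inf")
    for label, counter in counts.items():
        total = sum(counter.values())
        # Laplace (add-one) smoothing so unseen words do not zero out the score.
        logprob = sum(
            math.log((counter[word] + 1) / (total + len(vocab)))
            for word in headline.split()
        )
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

counts, vocab = train(DOCS_BY_LABEL)
print(classify("dancing robots on the moon", counts, vocab))  # → generated
```

A transformer classifier, as in the paper, would replace the bag-of-words likelihoods with a fine-tuned pretrained encoder, but the input/output contract (headline in, human/generated label out) is the same.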
CITATION STYLE
Maronikolakis, A., Schütze, H., & Stevenson, M. (2021). Identifying Automatically Generated Headlines using Transformers. In NLP4IF 2021 - NLP for Internet Freedom: Censorship, Disinformation, and Propaganda, Proceedings of the 4th Workshop (pp. 1–6). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.nlp4if-1.1