Human vs. automatic annotation regarding the task of relevance detection in social networks


Abstract

The rise of social networks and the possibility of being continuously connected have made information diffusion faster. In particular, real-time posting allows news and events to be reported more quickly through social networks than through traditional news media. However, the massive amount of data available daily makes newsworthy information a needle in a haystack. Our goal is therefore to build models that automatically detect journalistic relevance in social networks. To do so, it is essential to establish a ground truth with a large number of entries that can provide a suitable basis for the learning algorithms, a difficult task given the ambiguity and wide scope of the concept of relevance. In this paper, we propose and compare two methodologies for annotating posts with respect to their relevance: automatic and human annotation. Preliminary results show that supervised models trained with automatically annotated data tend to outperform those trained with human annotation on a test dataset labeled by experts.
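The comparison described above can be sketched as follows: the same supervised model is trained twice, once on automatically annotated data and once on human-annotated data, and both versions are evaluated on an expert-labeled test set. This is a hypothetical illustration with synthetic data and assumed label-noise rates, not the paper's actual experimental setup.

```python
# Sketch of the evaluation setup: train one classifier per annotation
# methodology, then score both on an expert-labeled held-out test set.
# Features, labels, and noise rates here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))          # stand-in post features
true_w = rng.normal(size=20)
expert = lambda X: (X @ true_w > 0).astype(int)  # proxy for expert judgment

# Two noisy copies of the training labels (noise rates are assumptions):
y_auto = expert(X_train) ^ (rng.random(500) < 0.10)   # automatic annotation
y_human = expert(X_train) ^ (rng.random(500) < 0.20)  # human annotation

X_test = rng.normal(size=(200, 20))
y_test = expert(X_test)                        # expert-labeled test set

results = {}
for name, y in [("automatic", y_auto), ("human", y_human)]:
    clf = LogisticRegression(max_iter=1000).fit(X_train, y)
    results[name] = f1_score(y_test, clf.predict(X_test))
print(results)
```

Whichever annotation methodology yields labels closer to the expert ground truth will tend to produce the higher test F1, which is the kind of comparison the paper's preliminary results report.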

Citation (APA)

Guimarães, N., Miranda, F., & Figueira, Á. (2018). Human vs. automatic annotation regarding the task of relevance detection in social networks. In Lecture Notes on Data Engineering and Communications Technologies (Vol. 17, pp. 922–933). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-319-75928-9_85
