Crowdsourcing in Article Evaluation

  • Peters I
  • Haustein S
  • Terliesner J
Citations: N/A
Readers: 34 (Mendeley users who have this article in their library)

Abstract

Qualitative journal evaluation makes use of cumulated content descriptions of single articles. These can be represented by author-generated keywords, professionally indexed subject headings, automatically extracted terms, or by reader-generated tags as used in social bookmarking systems. It is assumed that the users’ view on article content in particular differs significantly from the authors’ or indexers’ perspectives. To verify this assumption, title and abstract terms, author keywords, Inspec subject headings, KeyWords Plus™ and tags are compared by calculating the overlap between the respective datasets. Our approach includes extensive term preprocessing (i.e., stemming, spelling unification) to obtain a homogeneous term collection. When term overlap is calculated for every single document of the dataset, similarity values are low. Thus, the presented study confirms the assumption that the different types of keywords each reflect a different perspective on the articles’ contents and that tags (cumulated across articles) can be used in journal evaluation to represent a reader-specific view on published content.
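To illustrate the kind of per-document comparison the abstract describes, below is a minimal Python sketch that preprocesses two term lists (lowercasing and stemming to unify spelling variants) and computes their overlap. The Jaccard measure, the stemmer choice, and the example terms are assumptions for illustration; the paper's exact preprocessing pipeline and similarity metric may differ.

```python
# Sketch of a per-document term-overlap comparison between two keyword sources,
# assuming a Jaccard-style measure (not necessarily the measure used in the paper).
from nltk.stem import PorterStemmer  # requires nltk (pip install nltk)

stemmer = PorterStemmer()


def preprocess(terms):
    """Lowercase and stem each word of every term to unify spelling variants."""
    return {" ".join(stemmer.stem(w) for w in t.lower().split()) for t in terms}


def jaccard_overlap(terms_a, terms_b):
    """Jaccard similarity between two preprocessed term sets."""
    a, b = preprocess(terms_a), preprocess(terms_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


# Hypothetical example: reader tags vs. author keywords for a single article.
tags = ["social bookmarking", "tagging", "folksonomy"]
author_keywords = ["social tagging", "journal evaluation", "folksonomies"]
print(jaccard_overlap(tags, author_keywords))
```

Low values from such a per-document comparison would be consistent with the study's finding that each keyword type reflects a different perspective on an article's content.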

Citation (APA)

Peters, I., Haustein, S., & Terliesner, J. (2011). Crowdsourcing in Article Evaluation. In Proceedings of the 3rd ACM International Conference on Web Science (pp. 2–5). Koblenz, Germany.
