Abstract
Scientific topics, claims and resources are increasingly debated in online discourse; prominent examples include discussions of COVID-19 and climate change. This has led to significant societal impact and to growing interest in scientific online discourse across disciplines. For instance, communication studies aim at a deeper understanding of the biases, quality and spreading patterns of scientific information, whereas computational methods have been proposed to extract, classify or verify scientific claims using NLP and IR techniques. However, research across disciplines currently suffers from a lack of both robust definitions of the various forms of science-relatedness and appropriate ground-truth data for distinguishing them. In this work, we contribute (a) an annotation framework and corresponding definitions for different forms of scientific relatedness of online discourse in tweets, (b) an expert-annotated dataset of 1,261 tweets obtained through our labeling framework, reaching an average Fleiss' kappa (κ) of 0.63, and (c) a multi-label classifier trained on our data that detects science-relatedness with 89% F1 and also detects distinct forms of scientific knowledge (claims, references). With this work, we aim to lay the foundation for developing and evaluating robust methods for analysing science as part of large-scale online discourse.
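The multi-label setting described above, where a single tweet can simultaneously be science-related, contain a scientific claim, and carry a reference, can be illustrated with a minimal sketch. The labels, example tweets, and model choice below are hypothetical and not the classifier from the paper; this only demonstrates the multi-label formulation using scikit-learn.

```python
# Hypothetical multi-label sketch: a tweet may carry several forms of
# science-relatedness at once (e.g. claim AND reference). Data and model
# are illustrative, not the SciTweets classifier itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

tweets = [
    "New study shows vaccines reduce transmission, DOI in thread",
    "Climate models predict further warming by 2030, researchers say",
    "Just had the best coffee of my life",
    "Paper claims masks cut infection risk, link to preprint below",
]
# Per-tweet label sets: each tweet can receive zero or more labels.
labels = [
    {"science_related", "claim", "reference"},
    {"science_related", "claim"},
    set(),
    {"science_related", "claim", "reference"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per label

# One independent binary classifier per label (one-vs-rest) over TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(tweets, Y)

pred = clf.predict(["Researchers report new evidence on warming trends"])
print(mlb.inverse_transform(pred))  # the predicted label set for the new tweet
```

A one-vs-rest decomposition is only one way to handle multi-label text classification; the key point is that the target is an indicator matrix rather than a single class per tweet.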
Citation
Hafid, S., Schellhammer, S., Bringay, S., Todorov, K., & Dietze, S. (2022). SciTweets - A Dataset and Annotation Framework for Detecting Scientific Online Discourse. In International Conference on Information and Knowledge Management, Proceedings (pp. 3988–3992). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557693