Sensing and learning human annotators engaged in narrative sensemaking

Abstract

While labor issues and quality assurance in crowdwork are increasingly studied, how annotators make sense of texts and how they are personally impacted by doing so are not. We study these questions via a narrative-sorting annotation task in which collections of tweets, carefully selected by sequentiality, topic, emotional content, and length, serve as examples of everyday storytelling. As readers process these narratives, we measure their facial expressions, galvanic skin response, and self-reported reactions. From the perspective of annotator well-being, a reassuring outcome was that the sorting task did not cause a measurable stress response, although readers did react to humor. In terms of sensemaking, readers were more confident when sorting sequential, target-topical, and highly emotional tweets. As crowdsourcing becomes more common, this research sheds light on the perceptive capabilities and emotional impact of human readers.

Citation (APA)

Tornblad, M. K., Lapresi, L., Homan, C. M., Ptucha, R. W., & Alm, C. O. (2018). Sensing and learning human annotators engaged in narrative sensemaking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Student Research Workshop (pp. 136–143). Association for Computational Linguistics. https://doi.org/10.18653/v1/n18-4019
