Comparing human and algorithm performance on estimating word-based semantic similarity

Abstract

Understanding natural language is an inherently complex task for computer algorithms. Crowdsourcing natural language tasks such as semantic similarity estimation is therefore a promising approach. In this paper, we investigate the performance of crowdworkers and compare it to that of offline contributors as well as state-of-the-art algorithms. We show that algorithms outperform individual human contributors but still cannot compete with results aggregated from groups of contributors, and we demonstrate that this effect persists across different contributor populations. Finally, we give guidelines for easing the challenge of collecting word-based semantic similarity data from human contributors.
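
The comparison the abstract describes can be sketched in a few lines: score each word pair with an algorithm, aggregate the ratings of several human contributors, and measure how well each estimate matches a reference rating. The sketch below is a minimal Python illustration, assuming Spearman rank correlation as the evaluation metric; the word pairs, rating values, and algorithm scores are hypothetical placeholders rather than the paper's actual data or method.

```python
# Hypothetical sketch of the evaluation pattern: compare single contributors,
# the aggregated group, and an algorithm against a gold-standard rating.
# All numbers below are illustrative placeholders, not the paper's data.
import numpy as np
from scipy.stats import spearmanr

# Placeholder word pairs and gold-standard similarity ratings (0-10 scale).
pairs = [("car", "automobile"), ("coast", "shore"), ("noon", "string")]
gold = np.array([9.5, 9.1, 0.5])

# One row per human contributor, one column per word pair (hypothetical values).
contributor_ratings = np.array([
    [9.0, 8.0, 1.0],
    [10.0, 9.5, 0.5],
    [7.0, 8.0, 2.0],
])

# Hypothetical algorithm output, e.g. embedding cosine similarity rescaled to 0-10.
algorithm_scores = np.array([8.8, 8.2, 1.5])

def rho(estimates):
    """Spearman rank correlation of similarity estimates against the gold standard."""
    r, _ = spearmanr(estimates, gold)
    return r

single = np.mean([rho(r) for r in contributor_ratings])   # average single contributor
group = rho(contributor_ratings.mean(axis=0))              # aggregated group judgment
algorithm = rho(algorithm_scores)                          # algorithmic estimate

print(f"single: {single:.2f}  group: {group:.2f}  algorithm: {algorithm:.2f}")
```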

Citation (APA)

Batram, N., Krause, M., & Dehaye, P. O. (2015). Comparing human and algorithm performance on estimating word-based semantic similarity. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8852, pp. 452–460). Springer Verlag. https://doi.org/10.1007/978-3-319-15168-7_55
