How well do semantic relatedness measures perform? A meta-study

18 Citations · 90 Readers (Mendeley)

Abstract

Various semantic relatedness, similarity, and distance measures have been proposed in the past decade, and many NLP applications strongly rely on these semantic measures. Researchers compete for better algorithms, and normally only a few percentage points seem to suffice to prove that a new measure outperforms an older one. In this paper we present a meta-study comparing various semantic measures and their correlation with human judgments. We show that the results are rather inconsistent and call for detailed analyses as well as clarification. We argue that the definition of a shared task might bring us considerably closer to understanding the concept of semantic relatedness.

Citation (APA)

Cramer, I. (2008). How well do semantic relatedness measures perform? A meta-study. In Semantics in Text Processing, STEP 2008 - Conference Proceedings (pp. 59–70). Association for Computational Linguistics (ACL).
