Various semantic relatedness, similarity, and distance measures have been proposed over the past decade, and many NLP applications rely heavily on these semantic measures. Researchers compete for better algorithms, and often a gain of only a few percentage points seems to suffice to claim that a new measure outperforms an older one. In this paper we present a meta-study comparing various semantic measures and their correlation with human judgments. We show that the results are rather inconsistent, and we call for detailed analysis and clarification. We argue that defining a shared task could bring us considerably closer to understanding the concept of semantic relatedness.
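The evaluation methodology the abstract refers to, correlating a measure's scores with human judgments over a set of word pairs, is commonly computed as a Spearman rank correlation. The sketch below, with invented illustrative ratings (real studies use benchmark datasets such as RG-65 or WordSim-353), shows the basic computation; it is not code from the paper.

```python
# Hedged sketch: correlating a semantic measure's scores with human
# judgments via Spearman rank correlation. The word-pair ratings below
# are invented for illustration, not data from the paper.

def rankdata(values):
    """Assign average 1-based ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: human ratings (0-4 scale) vs. a measure's scores
# (0-1 scale) for five word pairs; only the rankings matter.
human = [3.9, 3.5, 1.2, 0.4, 2.8]
measure = [0.82, 0.75, 0.30, 0.60, 0.15]
print(round(spearman(human, measure), 3))  # -> 0.6
```

A meta-study of the kind described would compute this coefficient for each measure against each human-rated dataset and compare the resulting correlation scores across measures.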
CITATION STYLE
Cramer, I. (2008). How well do semantic relatedness measures perform? A meta-study. In Semantics in Text Processing, STEP 2008 - Conference Proceedings (pp. 59–70). Association for Computational Linguistics (ACL).