Semantic similarity measures between linguistic terms are essential in many Natural Language Processing (NLP) applications. Term similarity is conventionally treated as a symmetric relation. However, directional (asymmetric) semantic relations exist in lexical semantics, and symmetric similarity measures are ill-suited to identifying them. Moreover, directional similarity captures a more general setting and is more practical than symmetric similarity in certain NLP applications. As the foundation of similarity measures, current semantic features cannot efficiently represent large-scale web text collections. We therefore propose a new directional similarity method that considers feature representations in both linguistic and extra-linguistic dimensions. We evaluate our approach on standard word-similarity benchmarks, reporting state-of-the-art performance on multiple datasets. Experiments show that our directional method handles both symmetric and directional semantic relations and yields clear improvements in entity search and query expansion.
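To illustrate what "directional" means here, the following is a minimal sketch of a feature-overlap similarity in the spirit of distributional inclusion baselines (e.g., Weeds Precision). It is an assumption for illustration only, not the multi-feature method proposed in the paper; the feature dictionaries and weights are hypothetical.

```python
def directional_sim(features_a, features_b):
    """Directional similarity of term a toward term b.

    features_a / features_b: dicts mapping a feature name to its weight.
    Returns the share of a's feature weight that is also covered by b,
    so in general directional_sim(a, b) != directional_sim(b, a).
    """
    total = sum(features_a.values())
    if total == 0.0:
        return 0.0
    # Weight of a's features that also occur among b's features.
    shared = sum(w for f, w in features_a.items() if f in features_b)
    return shared / total

# Hypothetical toy features: a narrow term ("dog") shares most of its
# features with a broad term ("animal"), but not vice versa.
dog = {"barks": 1.0, "has_fur": 1.0, "is_pet": 1.0}
animal = {"has_fur": 1.0, "breathes": 1.0, "moves": 1.0, "is_pet": 1.0}

print(directional_sim(dog, animal))  # 2/3: dog is largely subsumed by animal
print(directional_sim(animal, dog))  # 1/2: the reverse direction is weaker
```

The asymmetry of the two scores is exactly what a symmetric measure such as cosine similarity cannot express, which motivates directional measures for relations like hyponymy.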
CITATION STYLE
Liu, B., Shi, X., & Jin, H. (2016). Measuring directional semantic similarity with multi-features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9931 LNCS, pp. 543–554). Springer Verlag. https://doi.org/10.1007/978-3-319-45814-4_44