Learning word representations from relational graphs


Abstract

Attributes of words and relations between two words are central to numerous tasks in Artificial Intelligence, such as knowledge representation, similarity measurement, and analogy detection. Often, when two words share one or more attributes, they are connected by some semantic relation. Conversely, if two words are connected by numerous semantic relations, we can expect some of the attributes of one word to be inherited by the other. Motivated by this close connection between attributes and relations, we propose a method that, given a relational graph in which words are interconnected via numerous semantic relations, learns a latent representation for each individual word. The proposed method considers not only the co-occurrences of words, as existing approaches to word representation learning do, but also the semantic relations in which two words co-occur. To evaluate the accuracy of the word representations learnt by the proposed method, we use them to solve semantic word analogy problems. Our experimental results show that it is possible to learn better word representations by using the semantic relations between words.
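The abstract describes the approach only at a high level, so the following Python sketch is a minimal illustration of the underlying idea rather than the authors' actual algorithm: each word is represented by counts over (relation, neighbour) features taken from a toy relational graph (all triples and relation labels below are invented for illustration), and analogies are then answered with the standard vector-offset method. The paper itself learns latent representations; this sketch keeps sparse count vectors for brevity.

    from collections import defaultdict
    import numpy as np

    # A toy relational graph as (word, relation, word) triples. The words
    # and relation labels here are hypothetical, purely for illustration.
    TRIPLES = [
        ("lion",    "is-a", "cat"),
        ("tiger",   "is-a", "cat"),
        ("ostrich", "is-a", "bird"),
        ("penguin", "is-a", "bird"),
        ("lion",    "can",  "roar"),
        ("tiger",   "can",  "roar"),
        ("ostrich", "can",  "run"),
        ("penguin", "can",  "swim"),
    ]

    def build_vectors(triples):
        # Index every (relation, neighbour) pair as one feature dimension,
        # so the relation type, not just raw co-occurrence, shapes the vector.
        feats = sorted({(r, b) for a, r, b in triples} |
                       {(r, a) for a, r, b in triples})
        idx = {f: i for i, f in enumerate(feats)}
        vocab = sorted({w for a, r, b in triples for w in (a, b)})
        vecs = {w: np.zeros(len(feats)) for w in vocab}
        for a, r, b in triples:
            vecs[a][idx[(r, b)]] += 1.0   # a occurs with b under relation r
            vecs[b][idx[(r, a)]] += 1.0   # and symmetrically for b
        return vecs

    def cosine(u, v):
        n = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v) / n if n else 0.0

    def solve_analogy(vecs, a, b, c):
        # Standard vector-offset answer to "a is to b as c is to ?":
        # return the word whose vector lies closest to b - a + c.
        target = vecs[b] - vecs[a] + vecs[c]
        candidates = (w for w in vecs if w not in (a, b, c))
        return max(candidates, key=lambda w: cosine(vecs[w], target))

    vecs = build_vectors(TRIPLES)
    print(cosine(vecs["lion"], vecs["tiger"]))    # 1.0: share both relations
    print(cosine(vecs["lion"], vecs["ostrich"]))  # 0.0: no shared features
    print(solve_analogy(vecs, "lion", "tiger", "ostrich"))  # -> "penguin"

On this toy graph the analogy "lion is to tiger as ostrich is to ?" resolves to "penguin", because both pairs share an is-a head, echoing the abstract's observation that words sharing attributes tend to be connected by the same semantic relations.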

Citation (APA)

Bollegala, D., Maehara, T., Yoshida, Y., & Kawarabayashi, K. I. (2015). Learning word representations from relational graphs. In Proceedings of the National Conference on Artificial Intelligence (Vol. 3, pp. 2146–2152). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9494
