A generative model of words and relationships from multiple sources

Abstract

Neural language models are a powerful tool to embed words into semantic vector spaces. However, learning such models generally relies on the availability of abundant and diverse training examples. In highly specialised domains this requirement may not be met due to difficulties in obtaining a large corpus, or the limited range of expression in average use. Such domains may encode prior knowledge about entities in a knowledge base or ontology. We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information. We achieve this by generalising the concept of co-occurrence from distributional semantics to include other relationships between entities or words, which we model as affine transformations on the embedding space. We demonstrate the effectiveness of this approach by outperforming recent models on a link prediction task and by demonstrating its ability to profit from partially or fully unobserved training labels. We further demonstrate the usefulness of learning from different data sources with overlapping vocabularies.
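
To make the central idea concrete, the following is a minimal sketch of how a relationship can be modelled as an affine transformation on the embedding space and used to score candidate links. The toy vocabulary, the relation name "treats", and the cosine-similarity scoring are illustrative assumptions for this sketch, not the paper's exact generative model or energy function.

import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical toy vocabulary with random embeddings standing in for
# vectors learned jointly from corpus co-occurrence and knowledge-base facts.
embeddings = {w: rng.normal(size=dim) for w in ["aspirin", "headache", "fever"]}

# Each relationship r is an affine map (W_r, b_r) acting on the embedding space.
relations = {"treats": (rng.normal(size=(dim, dim)), rng.normal(size=dim))}

def score(source, relation, target):
    """Score a candidate link: similarity between the transformed source
    embedding and the target embedding (higher = more plausible)."""
    W, b = relations[relation]
    transformed = W @ embeddings[source] + b
    t = embeddings[target]
    return float(transformed @ t / (np.linalg.norm(transformed) * np.linalg.norm(t)))

# Link prediction as ranking: order candidate targets for ("aspirin", "treats", ?).
candidates = sorted(embeddings, key=lambda t: score("aspirin", "treats", t), reverse=True)
print(candidates)

Because ordinary co-occurrence can be treated as one relation among many (e.g. with the identity transformation), a single model of this form can pool evidence from corpora and knowledge bases over a shared embedding space, which is the sharing of semantic information the abstract describes.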

Cite

CITATION STYLE

APA

Hyland, S. L., Karaletsos, T., & Rätsch, G. (2016). A generative model of words and relationships from multiple sources. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 2622–2629). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10335
