Words are vectors, dependencies are matrices: Learning word embeddings from dependency graphs

Citations: 9
Readers: 71 (Mendeley users who have this article in their library)

Abstract

Distributional Semantic Models (DSMs) construct vector representations of word meanings based on their contexts. Typically, the contexts of a word are defined as its closest neighbours, but they can also be retrieved from its syntactic dependency relations. In this work, we propose a new dependency-based DSM. The novelty of our model lies in associating an independent meaning representation, a matrix, with each dependency label. This allows the model to capture the specifics of the relations between words and contexts, leading to good performance on both intrinsic and extrinsic evaluation tasks. In addition, our model has an inherent ability to represent dependency chains as products of matrices, which provides a straightforward way of handling further contexts of a word.
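The abstract's central idea can be illustrated concretely: words map to vectors, each dependency label maps to its own matrix, and a chain of dependencies composes by matrix multiplication. Below is a minimal NumPy sketch of that representational scheme; it is not the authors' implementation, and the dimensionality, vocabulary, dependency labels, and the context_rep helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # embedding dimensionality (illustrative; the paper's choice may differ)

# Words are vectors: one embedding per vocabulary item (random here; learned in practice).
word_vec = {w: rng.normal(size=DIM) for w in ["dog", "chased", "cat"]}

# Dependencies are matrices: an independent matrix per dependency label.
dep_mat = {lab: rng.normal(size=(DIM, DIM)) for lab in ["nsubj", "dobj"]}

def context_rep(word, labels):
    """Represent a word seen through a chain of dependency labels by
    multiplying the labels' matrices and applying the product to the
    word's vector (dependency chains as products of matrices)."""
    m = np.eye(DIM)
    for lab in labels:
        m = m @ dep_mat[lab]
    return m @ word_vec[word]

# A direct dependency context: "dog" as the nsubj of some head word.
direct = context_rep("dog", ["nsubj"])

# A two-step chain, e.g. head --nsubj--> ... --dobj--> "cat".
chained = context_rep("cat", ["nsubj", "dobj"])

print(direct.shape, chained.shape)  # (50,) (50,)
```

In an actual model the vectors and matrices would be fitted jointly, for instance with a skip-gram-style objective in which a word's vector is trained to be predictive of its matrix-transformed dependency contexts; the sketch above shows only the representational scheme, not the training procedure.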


Citation (APA)

Czarnowska, P., Emerson, G., & Copestake, A. (2019). Words are vectors, dependencies are matrices: Learning word embeddings from dependency graphs. In IWCS 2019 - Proceedings of the 13th International Conference on Computational Semantics - Long Papers (pp. 91–102). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-0408

Readers over time

[Chart: Mendeley reader counts per year, 2019–2025]

Readers' Seniority

PhD / Post grad / Masters / Doc: 19 (66%)
Researcher: 5 (17%)
Professor / Associate Prof.: 3 (10%)
Lecturer / Post doc: 2 (7%)

Readers' Discipline

Computer Science: 27 (77%)
Linguistics: 5 (14%)
Engineering: 2 (6%)
Neuroscience: 1 (3%)
