Generalized tuning of distributional word vectors for monolingual and cross-lingual lexical entailment


Abstract

Lexical entailment (LE; also known as the hyponymy-hypernymy or is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet, unlike existing retrofitting models, it captures a general specialization function, allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN in graded LE and report large improvements (over 20% in accuracy) over the state of the art in cross-lingual LE detection.
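The key idea, a learned specialization function that maps *any* vector in the distributional space (not just constraint words) into an LE-tuned space, can be sketched as follows. This is a minimal illustration, not GLEN's actual architecture or training objective: the single `tanh` layer and the cosine-plus-norm-difference LE score below are hypothetical simplifications, and all names (`specialize`, `le_score`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 300  # dimensionality of the input distributional space

# Parameters of the specialization function (in practice, learned from
# hyponym-hypernym constraints; here randomly initialized for illustration).
W = rng.standard_normal((DIM, DIM)) * 0.01
b = np.zeros(DIM)

def specialize(x):
    """Map a distributional vector into the LE-specialized space.
    Because this is a global function of the space, it applies equally
    to words never seen in the lexical constraints."""
    return np.tanh(W @ x + b)

def le_score(x, y):
    """Hypothetical asymmetric score for 'x entails y': cosine similarity
    of the specialized vectors plus a norm-difference term, so that
    le_score(x, y) != le_score(y, x) in general."""
    fx, fy = specialize(x), specialize(y)
    cos = fx @ fy / (np.linalg.norm(fx) * np.linalg.norm(fy))
    return cos + (np.linalg.norm(fy) - np.linalg.norm(fx))

# Works for arbitrary (e.g., unseen or cross-lingual) vectors:
v_dog, v_animal = rng.standard_normal(DIM), rng.standard_normal(DIM)
print(le_score(v_dog, v_animal), le_score(v_animal, v_dog))
```

With a shared multilingual embedding space, the same learned function can score a pair whose two vectors come from different languages, which is what makes cross-lingual LE detection fall out for free.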

Citation (APA)

Glavaš, G., & Vulić, I. (2019). Generalized tuning of distributional word vectors for monolingual and cross-lingual lexical entailment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 4824–4830). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1476
