Jointly learning word representations and composition functions using predicate-argument structures


Abstract

We introduce a novel compositional language model that works on Predicate-Argument Structures (PASs). Our model jointly learns word representations and their composition functions using bag-of-words and dependency-based contexts. Unlike previous word-sequence-based models, our PAS-based model composes arguments into predicates by using the category information from the PAS. This enables our model to capture long-range dependencies between words and to better handle constructs such as verb-object and subject-verb-object relations. We verify this experimentally using two phrase similarity datasets and achieve results comparable to or higher than the previous best results. Our system achieves these results without the need for pretrained word vectors and using a much smaller training corpus; despite this, for the subject-verb-object dataset our model improves upon the state of the art by as much as ~10% in relative performance.
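The abstract describes composing argument vectors into predicate vectors using category information from the PAS. The following is a minimal sketch of that idea, not the authors' exact formulation: the dimensionality, the set of argument categories, and the simple matrix-plus-nonlinearity composition scheme are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact model): an argument vector
# is composed into a predicate vector via a composition matrix selected by the
# argument's PAS category, so subjects and objects can be treated differently.
import numpy as np

DIM = 50                                  # assumed embedding dimensionality
CATEGORIES = ["ARG1", "ARG2"]             # assumed PAS argument categories

rng = np.random.default_rng(0)
# one composition matrix per argument category (jointly learnable parameters)
W = {c: rng.standard_normal((DIM, 2 * DIM)) * 0.1 for c in CATEGORIES}

def compose(pred_vec, arg_vec, category):
    """Combine a predicate vector with one argument vector.

    The argument's PAS category selects which composition matrix is applied.
    """
    stacked = np.concatenate([pred_vec, arg_vec])
    return np.tanh(W[category] @ stacked)

# Toy usage: compose a subject ("dog", ARG1) into a predicate ("run").
run, dog = rng.standard_normal(DIM), rng.standard_normal(DIM)
run_with_subject = compose(run, dog, "ARG1")
```

In a model of this kind, repeating such category-specific compositions over a predicate's arguments is what allows subject-verb-object relations to be represented as a single composed vector.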

Cite (APA)

Hashimoto, K., Stenetorp, P., Miwa, M., & Tsuruoka, Y. (2014). Jointly learning word representations and composition functions using predicate-argument structures. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1544–1555). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/d14-1163
