Syntactically aware neural architectures for definition extraction

28 Citations
81 Readers (Mendeley users who have this article in their library)

Abstract

Automatically identifying definitional knowledge in text corpora (Definition Extraction, or DE) is an important task with direct applications in, among others, Automatic Glossary Generation, Taxonomy Learning, Question Answering and Semantic Search. It is generally cast as a binary classification problem between definitional and non-definitional sentences. In this paper we present a set of neural architectures combining Convolutional and Recurrent Neural Networks, which are further enriched by incorporating linguistic information via syntactic dependencies. Our experimental results on the task of sentence classification, on two benchmark DE datasets (one generic, one domain-specific), show that these models obtain consistent state-of-the-art results. Furthermore, we demonstrate that models trained on clean Wikipedia-like definitions can successfully be applied to noisier domain-specific corpora.
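The CNN-then-RNN combination the abstract describes can be illustrated with a minimal, untrained sketch: convolutional filters extract local n-gram features from token embeddings, a recurrent pass summarizes the feature sequence, and a sigmoid scores the sentence as definitional or not. All dimensions, weights and the example sentence below are illustrative placeholders, not the authors' actual model (which additionally uses syntactic-dependency information).

```python
import math
import random

random.seed(0)
EMB, FILT_W, N_FILT, HID = 8, 3, 4, 6  # toy sizes, not the paper's hyperparameters


def rand_mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]


def conv_features(embs, filters):
    """1-D convolution over sliding token windows; one feature map per filter."""
    maps = []
    for f in filters:
        row = []
        for i in range(len(embs) - FILT_W + 1):
            window = [x for tok in embs[i:i + FILT_W] for x in tok]
            row.append(math.tanh(sum(w * x for w, x in zip(f, window))))
        maps.append(row)
    return maps


def rnn_last_state(maps, w_in, w_rec):
    """Plain tanh RNN over the per-position convolutional feature vectors."""
    h = [0.0] * HID
    for pos in range(len(maps[0])):
        x = [m[pos] for m in maps]  # N_FILT features at this position
        h = [math.tanh(sum(w_in[j][k] * x[k] for k in range(N_FILT)) +
                       sum(w_rec[j][k] * h[k] for k in range(HID)))
             for j in range(HID)]
    return h


def classify(tokens, params):
    """Return P(sentence is a definition) under the toy CNN+RNN model."""
    embs, filters, w_in, w_rec, w_out = params
    feats = conv_features([embs[t] for t in tokens], filters)
    h = rnn_last_state(feats, w_in, w_rec)
    z = sum(w * v for w, v in zip(w_out, h))
    return 1.0 / (1.0 + math.exp(-z))


# Toy definitional sentence with random (untrained) parameters.
sentence = ["a", "stack", "is", "a", "lifo", "data", "structure"]
embs = {t: [random.uniform(-0.5, 0.5) for _ in range(EMB)] for t in set(sentence)}
filters = [[random.uniform(-0.5, 0.5) for _ in range(FILT_W * EMB)]
           for _ in range(N_FILT)]
params = (embs, filters, rand_mat(HID, N_FILT), rand_mat(HID, HID),
          [random.uniform(-0.5, 0.5) for _ in range(HID)])

p = classify(sentence, params)
print(round(p, 3))
```

In the paper's setting the parameters would be learned from labeled definitional/non-definitional sentences, and the input representation would also encode syntactic dependencies; this sketch only shows the data flow of the convolutional and recurrent stages.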

Citation (APA)

Espinosa-Anke, L., & Schockaert, S. (2018). Syntactically aware neural architectures for definition extraction. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 2, pp. 378–385). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-2061
