Automatically identifying definitional knowledge in text corpora (Definition Extraction, or DE) is an important task with direct applications in, among others, Automatic Glossary Generation, Taxonomy Learning, Question Answering and Semantic Search. It is generally cast as a binary classification problem between definitional and non-definitional sentences. In this paper we present a set of neural architectures combining Convolutional and Recurrent Neural Networks, further enriched with linguistic information in the form of syntactic dependencies. Our experimental results on the task of sentence classification, over two benchmark DE datasets (one generic, one domain-specific), show that these models consistently achieve state-of-the-art results. Furthermore, we demonstrate that models trained on clean, Wikipedia-like definitions can successfully be applied to noisier domain-specific corpora.
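As an illustration of the kind of CNN + RNN combination the abstract describes, the sketch below shows one plausible sentence classifier for DE. It is not the authors' exact architecture: the syntactic-dependency enrichment is omitted, the layer sizes and the CNN-before-BiLSTM ordering are assumptions, and the class and parameter names are invented for this example.

```python
# A minimal sketch (not the paper's exact model) of a CNN + BiLSTM binary
# sentence classifier for definition extraction, assuming padded word-index
# inputs. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class CnnBlstmClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=100,
                 kernel_size=3, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Convolution over the time dimension extracts local n-gram features.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
        # BiLSTM captures longer-range context over the convolved features.
        self.lstm = nn.LSTM(n_filters, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Single logit: definitional vs. non-definitional sentence.
        self.out = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, n_filters, seq_len)
        x, _ = self.lstm(x.transpose(1, 2))            # (batch, seq_len, 2*hidden_dim)
        x = x.max(dim=1).values                        # max-pool over time
        return self.out(x).squeeze(-1)                 # logits for BCEWithLogitsLoss

# Example: score a batch of two padded sentences of length 20.
model = CnnBlstmClassifier(vocab_size=5000)
logits = model(torch.randint(1, 5000, (2, 20)))
```

In the paper, syntactic information would additionally be fed to the network (e.g., as dependency-based input representations); the sketch keeps only the convolutional and recurrent components for brevity.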
Citation:
Espinosa-Anke, L., & Schockaert, S. (2018). Syntactically aware neural architectures for definition extraction. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 2, pp. 378–385). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-2061