Abstract
We present a syntactic parser training paradigm that learns from large-scale Knowledge Bases. Because the Knowledge Base context is used only during training, the resulting parser has no inference-time dependency on the Knowledge Base, so prediction speed is unaffected. Knowledge Base information is injected into the model through an extension of the Augmented-loss training framework. Empirical results show that this approach achieves significant accuracy gains on syntactic categories such as coordination and apposition.
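To make the training-time-only use of the Knowledge Base concrete, below is a minimal sketch of an augmented-loss style update, not the authors' implementation: a toy scorer is trained with a treebank loss plus a weighted auxiliary loss derived from KB-projected labels. All names here (ToyParser, train_step, kb_weight, the label encodings) are illustrative assumptions; only the trained weights are used at prediction time, so the KB never enters the inference path.

```python
# Illustrative sketch of augmented-loss training with a KB-derived
# auxiliary objective. Hypothetical names throughout; not the paper's code.
import torch
import torch.nn as nn

class ToyParser(nn.Module):
    """Stand-in scorer: maps token features to label scores."""
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, num_labels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.scorer(feats)

def train_step(model, optimizer, feats, gold_labels, kb_labels, kb_weight=0.5):
    """One augmented-loss update: treebank loss plus a KB-derived term.

    The KB signal (kb_labels) is consulted only here, at training time;
    inference uses the trained weights alone, with no KB dependency.
    """
    ce = nn.CrossEntropyLoss()
    scores = model(feats)
    loss = ce(scores, gold_labels)                    # primary (treebank) loss
    loss = loss + kb_weight * ce(scores, kb_labels)   # KB auxiliary loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (illustrative): random features, gold treebank labels, and
# noisy KB-projected labels for the same tokens.
model = ToyParser(feat_dim=8, num_labels=3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
feats = torch.randn(4, 8)
gold = torch.tensor([0, 1, 2, 1])
kb = torch.tensor([0, 1, 1, 1])
print(train_step(model, opt, feats, gold, kb))
```

The weight on the KB term is a free hyperparameter in this sketch; the key design point it illustrates is that the auxiliary supervision shapes the model's parameters during training without adding any lookup cost at prediction time.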
Citation
Gesmundo, A., & Hall, K. B. (2014). Projecting the Knowledge Graph to Syntactic Parsing. In EACL 2014 - 14th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 28–32). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/e14-4006