We explore whether it is possible to build lighter parsers that are statistically equivalent to their standard counterparts across a wide set of languages with diverse structures and morphologies. As a testbed, we use the Universal Dependencies treebanks and transition-based dependency parsers trained with feed-forward networks. For these parsers, most existing research assumes a de facto standard set of embedded features and relies on pre-computation tricks to obtain speed-ups. We explore how these features and their embedding sizes can be reduced, and whether doing so yields speed-ups with a negligible impact on accuracy. The experiments show that grand-daughter features can be removed for the majority of treebanks without a significant (negative or positive) difference in LAS, and that the size of the embeddings can be notably reduced.
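To make the trade-off concrete, the following is a minimal sketch of the parameter budget in a Chen-and-Manning-style feed-forward parser scorer, where each parser state is represented by concatenated word, POS-tag, and arc-label embeddings drawn from a fixed feature template. All template and dimension values below are illustrative assumptions for the sake of the arithmetic, not the exact configuration evaluated in the paper; the vocabulary-sized embedding lookup tables are excluded for simplicity.

```python
def parser_params(n_word_feats, n_pos_feats, n_label_feats,
                  d_word, d_pos, d_label, hidden, n_transitions):
    """Count trainable parameters in the concatenation -> hidden -> softmax
    part of a feed-forward transition-based parser (biases included,
    embedding tables excluded)."""
    input_dim = (n_word_feats * d_word
                 + n_pos_feats * d_pos
                 + n_label_feats * d_label)
    hidden_layer = input_dim * hidden + hidden          # W1 + b1
    output_layer = hidden * n_transitions + n_transitions  # W2 + b2
    return hidden_layer + output_layer

# Hypothetical "standard" template: 18 word, 18 POS, and 12 label positions,
# including grand-daughter (leftmost-of-leftmost etc.) features.
standard = parser_params(18, 18, 12, d_word=100, d_pos=25, d_label=25,
                         hidden=200, n_transitions=100)

# Hypothetical "lighter" template: grand-daughter positions dropped and
# embedding sizes roughly halved.
lighter = parser_params(12, 12, 6, d_word=50, d_pos=12, d_label=12,
                        hidden=200, n_transitions=100)

print(standard, lighter)  # the lighter template shrinks the dense layers
```

Because the input dimension is the sum of (feature count x embedding size) over all feature types, the first hidden layer dominates the dense parameter count, so pruning feature positions and shrinking embeddings both attack the same bottleneck; this is what makes speed-ups plausible without architectural changes.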
Vilares, D., & Gómez-Rodríguez, C. (2018). Transition-based Parsing with Lighter Feed-Forward Networks. In EMNLP 2018 - 2nd Workshop on Universal Dependencies, UDW 2018 - Proceedings of the Workshop (pp. 162–172). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-6019