TurkuNLP: Delexicalized pre-training of word embeddings for dependency parsing

Citations: 2 · Mendeley readers: 69

Abstract

We present the TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. The system is based on the UDPipe parser, with a focus on exploring various techniques for pre-training the word embeddings used by the parser in order to improve its performance, especially on languages with small training sets. The system ranked 11th among the 33 participants overall, placing 8th on the small treebanks, 10th on the large treebanks, 12th on the parallel test sets, and 26th on the surprise languages.
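As a rough illustration of the delexicalization idea behind the title (a hypothetical sketch, not the authors' actual pipeline), one way to pre-train embeddings that transfer across vocabularies is to replace each word form in the treebank with its part-of-speech tag and morphological features before training. The `delexicalize` helper and the tag format below are assumptions for illustration:

```python
# Hypothetical sketch: delexicalize CoNLL-U-style tokens by replacing
# each word form with its UPOS tag plus morphological features, so that
# embeddings pre-trained on the resulting sequences do not depend on
# the language-specific vocabulary.

def delexicalize(sentence):
    """sentence: list of (form, upos, feats) triples -> list of strings."""
    return [f"{upos}|{feats}" if feats != "_" else upos
            for _, upos, feats in sentence]

sent = [("dogs", "NOUN", "Number=Plur"),
        ("bark", "VERB", "Mood=Ind|Tense=Pres"),
        (".", "PUNCT", "_")]
print(delexicalize(sent))
# ['NOUN|Number=Plur', 'VERB|Mood=Ind|Tense=Pres', 'PUNCT']
```

The delexicalized sequences could then be fed to any standard embedding trainer (e.g. word2vec) in place of the raw word forms.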

Citation (APA)

Kanerva, J., Luotolahti, J., & Ginter, F. (2017). TurkuNLP: Delexicalized pre-training of word embeddings for dependency parsing. In CoNLL 2017 - SIGNLL Conference on Computational Natural Language Learning, Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (pp. 119–125). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k17-3012
