Cross-lingual model transfer has been a promising approach for inducing dependency parsers for low-resource languages where annotated treebanks are unavailable. The major obstacles for the model transfer approach are two-fold: (1) lexical features are not directly transferable across languages; (2) target language-specific syntactic structures are difficult to recover. To address these two challenges, we present a novel representation learning framework for multi-source transfer parsing. Our framework enables multi-source transfer parsing with full lexical features in a straightforward way. Evaluated on the Google universal dependency treebanks (v2.0), our best models yield an absolute improvement of 6.53% in averaged labeled attachment score over delexicalized multi-source transfer models, and significantly outperform the most recently proposed state-of-the-art transfer system.
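The abstract does not spell out how lexical features become transferable, but the core idea behind cross-lingual representation learning is to place source- and target-language words in a shared vector space so a parser trained on source-language treebanks can consume target-language lexical features. Below is a minimal sketch, assuming a linear projection learned by least squares from a bilingual dictionary; the data, dimensions, and names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical toy data: rows are word embeddings. In practice these would
# come from monolingual embeddings plus a bilingual dictionary; all sizes
# here are illustrative assumptions, not the paper's setup.
rng = np.random.default_rng(0)
dim = 50
n_pairs = 500  # dictionary entries (source word, target word)

src_dict_vecs = rng.normal(size=(n_pairs, dim))  # source-language embeddings
tgt_dict_vecs = rng.normal(size=(n_pairs, dim))  # their translations' embeddings

# Learn a linear map W projecting target-language vectors into the
# source-language space by least squares: min_W ||tgt @ W - src||_F^2.
W, *_ = np.linalg.lstsq(tgt_dict_vecs, src_dict_vecs, rcond=None)

def project(tgt_vec: np.ndarray) -> np.ndarray:
    """Map a target-language word vector into the shared (source) space,
    so a parser trained with source-language lexical features can use it."""
    return tgt_vec @ W

# Any target-language word embedding can now serve as a lexical feature
# for a parser trained on source-language treebanks.
shared_vec = project(rng.normal(size=dim))
print(shared_vec.shape)  # (50,)
```

With such a shared space, transfer parsing is no longer restricted to delexicalized features (POS tags and the like), which is what the abstract's comparison against delexicalized multi-source transfer models measures.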
Guo, J., Che, W., Yarowsky, D., Wang, H., & Liu, T. (2016). A representation learning framework for multi-source transfer parsing. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 2734–2740). AAAI press. https://doi.org/10.1609/aaai.v30i1.10352