A representation learning framework for multi-source transfer parsing

Citations: 60
Mendeley readers: 45

Abstract

Cross-lingual model transfer is a promising approach for inducing dependency parsers in low-resource languages where annotated treebanks are not available. The model transfer approach faces two major obstacles: (1) lexical features are not directly transferable across languages, and (2) target-language-specific syntactic structures are difficult to recover. To address these two challenges, we present a novel representation learning framework for multi-source transfer parsing. Our framework straightforwardly allows multi-source transfer parsing with full lexical features. Evaluated on the Google universal dependency treebanks (v2.0), our best models yield an absolute improvement of 6.53% in averaged labeled attachment score over delexicalized multi-source transfer models. We also significantly outperform the most recently proposed state-of-the-art transfer system.
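For intuition, the following is a minimal sketch (Python; not the authors' implementation) of the idea the abstract describes: if words from the source languages and the target language are assumed to share one cross-lingual embedding space, an arc scorer trained on multiple source treebanks can be applied to the target language with full lexical features. The words, embeddings, and bilinear scorer below are toy assumptions for illustration only; a real transfer parser would optimize a structured loss over whole trees.

# A minimal sketch (not the authors' code) of multi-source transfer with
# shared cross-lingual representations. Words from English, German, and
# Spanish are assumed to live in one embedding space, so a lexicalized arc
# scorer trained on two source treebanks transfers to the unseen target.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy "cross-lingual" embeddings: translation equivalents share a base
# vector plus small noise, standing in for learned shared representations.
base_dog = rng.normal(size=DIM)
base_barks = rng.normal(size=DIM)
noise = lambda: 0.1 * rng.normal(size=DIM)
emb = {
    "en:dog": base_dog + noise(),   "en:barks": base_barks + noise(),
    "de:hund": base_dog + noise(),  "de:bellt": base_barks + noise(),
    "es:perro": base_dog + noise(), "es:ladra": base_barks + noise(),
}

def arc_score(head, dep, W):
    """Bilinear score for a candidate dependency arc head -> dep."""
    return emb[head] @ W @ emb[dep]

# "Multi-source training": accumulate a bilinear weight matrix from gold
# arcs in English and German treebanks (a real parser would minimize a
# structured loss over whole trees rather than this one-shot update).
W = np.zeros((DIM, DIM))
for head, dep in [("en:barks", "en:dog"), ("de:bellt", "de:hund")]:
    W += np.outer(emb[head], emb[dep])

# "Transfer": Spanish was never seen in training, yet the correct arc
# direction scores higher because its words share the embedding space.
print(arc_score("es:ladra", "es:perro", W))  # correct arc: high score
print(arc_score("es:perro", "es:ladra", W))  # reversed arc: typically much lower

The point of the sketch is only that, once representations are shared across languages, lexicalized features transfer for free; no delexicalization is needed.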

Citation (APA)

Guo, J., Che, W., Yarowsky, D., Wang, H., & Liu, T. (2016). A representation learning framework for multi-source transfer parsing. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 2734–2740). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10352

Readers over time

[Chart: Mendeley reader counts per year, 2015–2024]

Readers' Seniority

PhD / Post grad / Masters / Doc: 22 (69%)
Researcher: 6 (19%)
Professor / Associate Prof.: 2 (6%)
Lecturer / Post doc: 2 (6%)

Readers' Discipline

Computer Science: 29 (88%)
Business, Management and Accounting: 2 (6%)
Physics and Astronomy: 1 (3%)
Engineering: 1 (3%)
