Cross-lingual text classification assumes the availability of task-specific training data in a high-resource source language, where the task is identical to that of the low-resource target language. However, collecting such training data can be infeasible because of labeling costs, task characteristics, and privacy concerns. This paper proposes an alternative solution that uses only task-independent word embeddings of high-resource languages and bilingual dictionaries. First, we construct a dictionary-based heterogeneous graph (DHG) from bilingual dictionaries, which opens the possibility of applying graph neural networks to cross-lingual transfer. The remaining challenge is the heterogeneity of the DHG, since multiple languages are involved. To address this challenge, we propose the dictionary-based heterogeneous graph neural network (DHGNet), which effectively handles this heterogeneity with a two-step aggregation: a word-level aggregation followed by a language-level aggregation. Experimental results demonstrate that our method outperforms pretrained models even though it does not access large corpora. Furthermore, it performs well even when the dictionaries contain many incorrect translations. This robustness permits a wider range of dictionaries, such as automatically constructed and crowdsourced dictionaries, which are convenient for real-world applications.
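The two-step aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `two_step_aggregate`, the input layout, and the use of mean pooling in place of the paper's learned aggregators are all assumptions made for clarity.

```python
import numpy as np

def two_step_aggregate(neighbors_by_lang):
    """Hypothetical sketch of a DHGNet-style two-step aggregation.

    neighbors_by_lang: dict mapping a language code to an array of
    shape (num_translations, dim) -- embeddings of the dictionary
    translations of one target word in that language.

    Step 1 (word-level): aggregate translation embeddings within each
    language into one vector per language. Step 2 (language-level):
    aggregate the per-language vectors into a single representation.
    Mean pooling stands in for the learned aggregators of the paper.
    """
    # Word-level aggregation: one vector per language
    lang_vectors = [embs.mean(axis=0) for embs in neighbors_by_lang.values()]
    # Language-level aggregation: one vector for the target word
    return np.stack(lang_vectors).mean(axis=0)

# Toy example: a word with dictionary translations in two languages
rng = np.random.default_rng(0)
neighbors = {
    "en": rng.normal(size=(3, 4)),  # 3 English translations, dim 4
    "de": rng.normal(size=(2, 4)),  # 2 German translations, dim 4
}
vec = two_step_aggregate(neighbors)
print(vec.shape)  # (4,)
```

Aggregating within each language first keeps a language with many dictionary entries from dominating the final representation, which is the motivation for separating the two levels.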
Chairatanakul, N., Sriwatanasakdi, N., Charoenphakdee, N., Liu, X., & Murata, T. (2021). Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 1504–1517). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.130