We propose a simple yet effective approach for improving Korean word representations using additional linguistic annotation (i.e., Hanja). We employ cross-lingual transfer learning when training word representations, leveraging the fact that Hanja is closely related to Chinese. We evaluate the intrinsic quality of the representations learned through our approach using word analogy and similarity tests. In addition, we demonstrate their effectiveness on several downstream tasks, including a novel Korean news headline generation task.
Yoo, K. M., Kim, T., & Lee, S. G. (2019). Don't just scratch the surface: Enhancing word representations for Korean with Hanja. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 3528–3533). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-1358