Enhancing pre-trained Chinese character representation with word-aligned attention


Abstract

Most Chinese pre-trained models take the character as the basic unit and learn representations from a character's external contexts, ignoring the semantics expressed by the word, which is the smallest meaningful utterance in Chinese. Hence, we propose a novel word-aligned attention to exploit explicit word information, which is complementary to various character-based Chinese pre-trained language models. Specifically, we devise a pooling mechanism to align the character-level attention to the word level and propose to alleviate the potential issue of segmentation error propagation by multi-source information fusion. As a result, word and character information are explicitly integrated during the fine-tuning procedure. Experimental results on five Chinese NLP benchmark tasks demonstrate that our method achieves significant improvements over BERT, ERNIE and BERT-wwm.
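
The sketch below illustrates the general idea described in the abstract: pooling a character-level attention matrix within word boundaries and fusing the result across several segmentations. It is not the authors' implementation; the function names (word_aligned_attention, fuse_multi_source), the use of mean pooling, and the simple averaging across segmenters are illustrative assumptions, and the paper's actual pooling and fusion mechanisms may differ.

```python
import numpy as np

def word_aligned_attention(attn, word_spans):
    """Align a character-level attention matrix to word boundaries.

    attn:       (n, n) row-stochastic character-to-character attention weights.
    word_spans: list of (start, end) character index ranges, one per word,
                e.g. produced by a Chinese word segmenter (end is exclusive).
    Returns an (n, n) matrix in which, within each row, all characters
    belonging to the same word share one pooled attention score.
    """
    aligned = np.zeros_like(attn)
    for start, end in word_spans:
        # Pool the attention each query assigns to this word's characters
        # (mean pooling here for simplicity), then broadcast the pooled score
        # back to every character of the word.
        pooled = attn[:, start:end].mean(axis=1, keepdims=True)
        aligned[:, start:end] = pooled
    # Re-normalize rows so the aligned weights still sum to one.
    return aligned / aligned.sum(axis=1, keepdims=True)

def fuse_multi_source(attn, span_sources):
    """Average word-aligned attention over several segmenters' outputs,
    a simple stand-in for multi-source information fusion."""
    return np.mean([word_aligned_attention(attn, spans) for spans in span_sources], axis=0)

# Toy example: 5 characters, two hypothetical segmentations of the same sentence.
rng = np.random.default_rng(0)
scores = rng.random((5, 5))
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
aligned = fuse_multi_source(attn, [[(0, 2), (2, 3), (3, 5)], [(0, 3), (3, 5)]])
print(aligned.round(3))
```

In this toy setting, characters inside the same word receive identical attention weights within each row, which is the word-alignment effect the abstract refers to; fusing two segmentations softens the impact of any single segmentation error.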

Citation (APA)

Li, Y., Yu, B., Xue, M., & Liu, T. (2020). Enhancing pre-trained Chinese character representation with word-aligned attention. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3442–3448). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.315
