MVP-BERT: Multi-vocab pre-training for Chinese BERT

Abstract

Although the development of pre-trained language models (PLMs) has significantly raised the performance of various Chinese natural language processing (NLP) tasks, the vocabulary (vocab) of these Chinese PLMs remains the one provided by Google's Chinese BERT (Devlin et al., 2019), which is based on Chinese characters (chars). Moreover, masked language model pre-training is based on a single vocab, which limits downstream task performance. In this work, we first demonstrate experimentally that building a vocab via Chinese word segmentation (CWS) guided sub-word tokenization (SGT) can improve the performance of Chinese PLMs. We then propose two versions of multi-vocab pre-training (MVP), Hi-MVP and AL-MVP, to improve the models' expressiveness. Experiments show that: (a) the MVP training strategies improve PLMs' downstream performance, especially on span-level tasks; (b) our AL-MVP outperforms the recent AMBERT (Zhang & Li, 2020) after large-scale pre-training and is more robust against adversarial attacks.
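The abstract mentions building the vocab with CWS-guided sub-word tokenization but does not include code. As a rough illustration only, the sketch below shows one way such tokenization could work, assuming the jieba segmenter, a hypothetical toy vocab (TOY_VOCab is not the paper's vocab), and greedy longest-match splitting inside each word; the paper's actual SGT procedure and vocab construction may differ.

```python
import jieba

# Toy sub-word vocab for illustration only; the paper builds its vocab from
# large-scale corpora, which this sketch does not attempt to reproduce.
TOY_VOCAB = {"自然", "语言", "处理", "的", "任", "务", "自", "然", "语", "言", "处", "理"}

def split_within_word(word, vocab):
    """Greedy longest-match-first sub-word split of a single CWS word."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1
        if end == start:          # no piece matches: emit [UNK] for this char
            pieces.append("[UNK]")
            start += 1
        else:
            pieces.append(word[start:end])
            start = end
    return pieces

def sgt_tokenize(text, vocab=TOY_VOCAB):
    """Segment text into words with CWS first, then sub-tokenize each word,
    so that sub-word pieces never cross word boundaries."""
    tokens = []
    for word in jieba.cut(text):
        tokens.extend(split_within_word(word, vocab))
    return tokens

print(sgt_tokenize("自然语言处理的任务"))
# e.g. ['自然', '语言', '处理', '的', '任', '务'], depending on jieba's segmentation
```

The key design point the sketch tries to convey is that sub-word boundaries are constrained by the CWS output, which is what distinguishes a CWS-guided vocab from a plain character-level one.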

Citation (APA)

Zhu, W. (2021). MVP-BERT: Multi-vocab pre-training for Chinese BERT. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Student Research Workshop (pp. 260–269). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-srw.27
