While large-scale pretraining has achieved great success on many NLP tasks, whether external linguistic knowledge can further improve such data-driven models has not been fully studied. In this work, we introduce sememe knowledge into the Transformer and propose three sememe-enhanced Transformer models. Sememes, by linguistic definition, are the minimum semantic units of language and can capture the implicit semantic meanings behind words. Our experiments demonstrate that incorporating sememe knowledge into the Transformer consistently improves language modeling and downstream tasks, and adversarial tests further show that sememe knowledge substantially improves model robustness.
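As a rough illustration of the general idea rather than of the paper's three specific architectures, the sketch below augments each word embedding with the average of that word's sememe embeddings before feeding a standard Transformer encoder. The `SEMEME_LEXICON` entries, the `SememeEnhancedEmbedding` module, and all hyperparameters are hypothetical placeholders, not HowNet data or the authors' code.

```python
import torch
import torch.nn as nn

# Toy sememe annotations in the spirit of HowNet; these entries are
# illustrative placeholders, not real HowNet data.
SEMEME_LEXICON = {
    "doctor":   ["human", "occupation", "medical"],
    "hospital": ["institution", "medical", "place"],
    "run":      ["act", "move", "fast"],
}


class SememeEnhancedEmbedding(nn.Module):
    """Word embedding augmented with the mean of the word's sememe embeddings."""

    def __init__(self, vocab, sememes, d_model):
        super().__init__()
        self.word2id = {w: i for i, w in enumerate(vocab)}
        self.sememe2id = {s: i for i, s in enumerate(sememes)}
        self.word_emb = nn.Embedding(len(vocab), d_model)
        self.sememe_emb = nn.Embedding(len(sememes), d_model)

    def forward(self, words):
        vecs = []
        for w in words:
            v = self.word_emb(torch.tensor([self.word2id[w]]))[0]
            sems = SEMEME_LEXICON.get(w, [])
            if sems:
                ids = torch.tensor([self.sememe2id[s] for s in sems])
                # Fuse the averaged sememe embeddings into the word embedding.
                v = v + self.sememe_emb(ids).mean(dim=0)
            vecs.append(v)
        return torch.stack(vecs)  # (seq_len, d_model)


vocab = list(SEMEME_LEXICON)
sememes = sorted({s for ss in SEMEME_LEXICON.values() for s in ss})
embed = SememeEnhancedEmbedding(vocab, sememes, d_model=16)

# Feed the sememe-enhanced embeddings to a vanilla Transformer encoder.
x = embed(["doctor", "hospital", "run"]).unsqueeze(1)  # (seq_len, batch=1, d_model)
layer = nn.TransformerEncoderLayer(d_model=16, nhead=4)
encoder = nn.TransformerEncoder(layer, num_layers=1)
print(encoder(x).shape)  # torch.Size([3, 1, 16])
```

Fusing sememes at the embedding layer is only the simplest option; the paper's models may instead inject sememe knowledge at other points in the Transformer, such as the attention computation.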
Citation:
Zhang, Y., Yang, C., Zhou, Z., & Liu, Z. (2020). Enhancing transformer with sememe knowledge. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 177–184). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.repl4nlp-1.21