Enhancing Transformer with Sememe Knowledge

Abstract

While large-scale pretraining has achieved great success in many NLP tasks, whether external linguistic knowledge can further improve data-driven models has not been fully studied. In this work, we introduce sememe knowledge into Transformer and propose three sememe-enhanced Transformer models. Sememes are, by linguistic definition, the minimum semantic units of language, and they can effectively represent the implicit semantics behind words. Our experiments demonstrate that introducing sememe knowledge into Transformer consistently improves language modeling and downstream tasks. Adversarial tests further demonstrate that sememe knowledge can substantially improve model robustness.
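The abstract does not detail how the sememe knowledge is injected. As an illustration of the general idea only, here is a minimal, hypothetical sketch (not the authors' implementation, which proposes three distinct variants): each token's sememe embeddings are mean-pooled and added to its word embedding before the Transformer encoder. All names (e.g., `SememeEnhancedEmbedding`), the pooling choice, and the hyperparameters are assumptions made for this example; it assumes PyTorch.

```python
# Hypothetical sketch of sememe-enhanced input embeddings, not the paper's code.
import torch
import torch.nn as nn


class SememeEnhancedEmbedding(nn.Module):
    def __init__(self, vocab_size, num_sememes, d_model, pad_sememe_id=0):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # pad_sememe_id marks "no sememe" slots; padding_idx keeps it a zero vector.
        self.sememe_emb = nn.Embedding(num_sememes, d_model, padding_idx=pad_sememe_id)
        self.pad_sememe_id = pad_sememe_id

    def forward(self, token_ids, sememe_ids):
        # token_ids:  (batch, seq_len)
        # sememe_ids: (batch, seq_len, max_sememes_per_token), padded with pad_sememe_id
        tok = self.word_emb(token_ids)                                   # (B, L, D)
        sem = self.sememe_emb(sememe_ids)                                # (B, L, S, D)
        # Mean-pool over each token's non-pad sememes.
        mask = (sememe_ids != self.pad_sememe_id).unsqueeze(-1).float()  # (B, L, S, 1)
        count = mask.sum(dim=2).clamp(min=1.0)                           # avoid div by zero
        sem_pooled = (sem * mask).sum(dim=2) / count                     # (B, L, D)
        return tok + sem_pooled


# Usage: feed the fused embeddings into a standard Transformer encoder.
if __name__ == "__main__":
    d_model = 64
    embed = SememeEnhancedEmbedding(vocab_size=1000, num_sememes=200, d_model=d_model)
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)

    token_ids = torch.randint(1, 1000, (2, 8))     # batch of 2 sequences, length 8
    sememe_ids = torch.randint(0, 200, (2, 8, 4))  # up to 4 sememes per token
    out = encoder(embed(token_ids, sememe_ids))
    print(out.shape)  # torch.Size([2, 8, 64])
```

Mean pooling is just one plausible aggregation; attention-weighted pooling over a token's sememes would be a natural alternative, since not all sememes are equally relevant in context.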

Citation (APA)

Zhang, Y., Yang, C., Zhou, Z., & Liu, Z. (2020). Enhancing transformer with sememe knowledge. In Proceedings of the 5th Workshop on Representation Learning for NLP (pp. 177–184). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.repl4nlp-1.21
