Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?


Abstract

The susceptibility of Pretrained Language Models (PLMs) to noise has recently been linked to subword segmentation. However, it remains unclear which aspects of segmentation affect their understanding. This study assesses the robustness of PLMs against various forms of disrupted segmentation caused by noise. An evaluation framework for subword segmentation, named the Contrastive Lexical Semantic (CoLeS) probe, is proposed. It provides a systematic categorization of segmentation corruption under noise, together with evaluation protocols based on generated contrastive datasets of canonical-noisy word pairs. Experimental results indicate that PLMs cannot accurately compute word meanings when noise introduces completely different subwords, small subword fragments, or a large number of additional subwords, particularly when these are inserted within other subwords.
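To illustrate the phenomenon the abstract describes, the sketch below (not the authors' code; the vocabulary, words, and greedy longest-match rule are hypothetical stand-ins for a real tokenizer such as BPE or WordPiece) shows how a single character transposition can shatter a word that normally maps to one subword into several small, semantically unrelated fragments:

```python
# Toy greedy longest-match subword segmenter. The vocabulary is
# illustrative only; real PLM tokenizers learn their vocabularies
# from data, but the fragmentation effect under noise is analogous.
VOCAB = {"language", "lang", "uage", "age", "gu", "na"}

def segment(word, vocab=VOCAB):
    """Segment a word greedily, preferring the longest vocabulary match;
    characters not covered by any entry fall back to single-char subwords."""
    subwords, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest prefix first
            piece = word[i:j]
            if piece in vocab or j == i + 1:  # single char as fallback
                subwords.append(piece)
                i = j
                break
    return subwords

print(segment("language"))   # canonical word: a single subword
print(segment("lnaguage"))   # transposition typo: small fragments
```

Here the canonical form segments as `['language']`, while the noisy form becomes `['l', 'na', 'gu', 'age']` — exactly the kind of small-fragment corruption the paper finds PLMs struggle to recover word meaning from.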

Citation (APA)

Li, X., Liu, M., & Gao, S. (2023). Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 165–173). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.starsem-1.15
