Decoding Symbolism in Language Models


Abstract

This work explores the feasibility of eliciting knowledge from language models (LMs) to decode symbolism, i.e., recognizing something (e.g., roses) as a stand-in for something else (e.g., love). We present an evaluative framework, Symbolism Analysis (SymbA), which compares LMs (e.g., RoBERTa, GPT-J) on different types of symbolism and analyzes the outcomes along multiple metrics. Our findings suggest that conventional symbols are more reliably elicited from LMs, while situated symbols are more challenging. Results also reveal the negative impact of bias in the pre-training corpora. We further demonstrate that a simple re-ranking strategy can mitigate the bias and significantly improve model performance, in some cases to a level on par with human performance.
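The abstract does not spell out the re-ranking strategy, so the following is only an illustrative sketch of one common way to counter frequency bias: discount each candidate's LM score by its prior corpus frequency (a PMI-style adjustment). All function names, scores, and frequencies below are hypothetical.

```python
import math

def rerank(candidates, lm_scores, prior_freqs):
    """Re-rank symbol candidates by discounting each raw LM score with the
    candidate's prior corpus frequency: score(c) = log p_LM(c) - log p_prior(c).
    This is an illustrative sketch, not the paper's actual method."""
    adjusted = {
        c: math.log(lm_scores[c]) - math.log(prior_freqs[c])
        for c in candidates
    }
    return sorted(candidates, key=lambda c: adjusted[c], reverse=True)

# Hypothetical scores for the cue "roses": the frequent word "flower"
# outscores "love" on raw LM probability, but dividing out the prior
# surfaces the symbolic reading "love".
lm_scores = {"love": 0.20, "flower": 0.35, "red": 0.10}
prior_freqs = {"love": 0.01, "flower": 0.05, "red": 0.04}
print(rerank(list(lm_scores), lm_scores, prior_freqs))
# → ['love', 'flower', 'red']
```

The intuition is that a frequency-heavy candidate must beat its own base rate to stay on top, which is one plausible way a "simple re-ranking strategy" could mitigate corpus bias.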

Citation (APA)

Guo, M., Hwa, R., & Kovashka, A. (2023). Decoding Symbolism in Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3311–3324). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.186