We investigate the phenomenon of untruthful responses from LLMs using a large set of 220 handcrafted linguistic features. We focus on GPT-3 models and find that the linguistic profiles of responses are similar across model sizes: LLMs of different sizes respond to a given prompt in ways that are similar at the level of linguistic properties. We expand on this finding by training support vector machines that rely only on the stylistic components of model responses to classify the truthfulness of statements. Although the dataset size limits our current findings, we show that truthfulness detection may be feasible without evaluating the content itself. At the same time, the limited scope of our experiments must be taken into account when interpreting the results.
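The core setup described above, training an SVM on stylistic features of model responses rather than on their content, can be illustrated with a minimal sketch. This is not the authors' code: the toy feature extractor (four surface statistics standing in for the 220 handcrafted features), the example responses and labels, and the RBF-kernel hyperparameters are all placeholder assumptions.

```python
# Minimal sketch: classify truthfulness of model responses from stylistic
# features only, with a support vector machine (scikit-learn).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def extract_linguistic_features(response: str) -> np.ndarray:
    """Hypothetical stand-in for the 220 handcrafted linguistic features
    computed on a model response; replace with a real feature extractor."""
    tokens = response.split()
    n_tokens = max(len(tokens), 1)
    return np.array([
        n_tokens,                                # response length in tokens
        sum(len(t) for t in tokens) / n_tokens,  # mean word length
        len(set(tokens)) / n_tokens,             # type-token ratio
        response.count(","),                     # comma count
    ], dtype=float)


# Toy, made-up (response, truthfulness) pairs; 1 = truthful, 0 = untruthful.
responses = [
    "The Sun is a star.",
    "Water boils at 100 degrees Celsius at sea level.",
    "Paris is the capital of France.",
    "The Earth orbits the Sun.",
    "The Moon is made of green cheese.",
    "Humans can breathe underwater without any equipment.",
    "The Great Wall of China is visible from the Moon with the naked eye.",
    "Lightning never strikes the same place twice.",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

X = np.vstack([extract_linguistic_features(r) for r in responses])
y = np.array(labels)

# SVM over stylistic features only; the classifier never sees the response
# text beyond what the feature vector encodes.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=3)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```

In the paper's setting, the feature vectors would instead be the full set of 220 handcrafted linguistic features computed on GPT-3 responses, with cross-validation over the labeled statements.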
Citation
Lee, B. W., Arockiaraj, B. F., & Jin, H. (2023). Linguistic Properties of Truthful Response. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023) (pp. 135–140). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.trustnlp-1.12