Recent advancements in large language models (LLMs) have enabled them to hold free-form conversations over multiple turns, but they exhibit a tendency to make unfounded and incorrect statements, commonly known as hallucinations. In particular, LLMs hallucinate frequently when given invalid questions, i.e., questions built on incorrect assumptions. The most common approach to evaluating LLMs for hallucinations is to test them on Question Answering (QA) test sets such as TruthfulQA. However, LLMs are increasingly pretrained on massive text corpora scraped from the Internet, which may expose these test sets to the models during training, eventually leading to an overestimation of model performance on them. In this work, we present an alternative framework to address this risk and to foster further research towards making LLMs robust against invalid questions. We name our framework INVITE: a testbed of automatically generated INValId questions to evaluaTE large language models for hallucinations. In each instantiation, our framework creates a fresh batch of invalid questions by distorting valid facts, replacing their subjects or objects with similar entities. We evaluate several state-of-the-art LLMs against a test set generated by our framework and highlight its capacity to trigger hallucinations in these models.
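To make the fact-distortion idea concrete, here is a minimal Python sketch of the general approach the abstract describes: take a valid fact triple, swap its subject or object for a similar entity, and phrase the result as a question with a false presupposition. The entity pools, similarity lists, and question template below are illustrative assumptions, not the authors' actual implementation.

```python
import random

# Hypothetical pools of "similar" entities. The INVITE framework derives its
# candidates from real facts; the specific entries here are made up for
# illustration only.
SIMILAR_ENTITIES = {
    "Nile": ["Amazon", "Danube"],
    "Egypt": ["Brazil", "Hungary"],
}


def distort_fact(subject: str, relation: str, obj: str) -> tuple[str, str, str]:
    """Replace the subject or the object with a similar entity so the
    resulting triple is no longer a valid fact."""
    if random.random() < 0.5 and subject in SIMILAR_ENTITIES:
        subject = random.choice(SIMILAR_ENTITIES[subject])
    elif obj in SIMILAR_ENTITIES:
        obj = random.choice(SIMILAR_ENTITIES[obj])
    return subject, relation, obj


def to_question(subject: str, relation: str, obj: str) -> str:
    """Phrase the distorted triple as a question that presupposes it.
    The template is a placeholder; the paper's templates may differ."""
    return f"Why does the {subject} {relation} {obj}?"


# Valid fact: the Nile flows through Egypt. After distortion, the question
# carries a false premise that a robust LLM should identify rather than answer.
s, r, o = distort_fact("Nile", "flow through", "Egypt")
print(to_question(s, r, o))
```

Because the distortion is applied to freshly sampled facts at evaluation time, each instantiation yields a new batch of invalid questions that is unlikely to have leaked into pretraining data.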