FANTOM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Citations: 15 · Mendeley readers: 38

Abstract

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity. We introduce FANTOM, a new benchmark designed to stress-test ToM within information-asymmetric conversational contexts via question answering. Our benchmark draws upon important theoretical requisites from psychology and necessary empirical considerations when evaluating large language models (LLMs). In particular, we formulate multiple types of questions that demand the same underlying reasoning, which allows us to identify an illusory or false sense of ToM capabilities in LLMs. We show that FANTOM is challenging for state-of-the-art LLMs, which perform significantly worse than humans even with chain-of-thought reasoning or fine-tuning.
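The core evaluation idea, requiring consistency across question types that probe the same underlying information, can be illustrated with a small scoring sketch. The record layout, field names, and question-type labels below are illustrative assumptions rather than FANTOM's actual data schema; the point is that an information piece counts as understood only when every linked question format is answered correctly, which exposes models that pass one format while failing another.

```python
from collections import defaultdict

# Hypothetical record format: each question is linked to the piece of
# information it probes (info_id) and tagged with its question type
# (e.g., belief, answerability, fact). Field names are illustrative,
# not the official FANTOM schema.
records = [
    {"info_id": "conv1_fact3", "q_type": "belief",        "correct": True},
    {"info_id": "conv1_fact3", "q_type": "answerability", "correct": False},
    {"info_id": "conv1_fact3", "q_type": "fact",          "correct": True},
]

def strict_all_types_score(records):
    """Credit an information piece only if every question type probing it
    is answered correctly; per-type accuracy alone can overstate ToM."""
    by_info = defaultdict(list)
    for r in records:
        by_info[r["info_id"]].append(r["correct"])
    return sum(all(vals) for vals in by_info.values()) / len(by_info)

print(f"strict all-types score: {strict_all_types_score(records):.2f}")
```

Under this strict aggregation, the example above scores 0.00 even though two of three individual answers are correct, which is the kind of gap the benchmark uses to flag illusory ToM.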

Citation (APA)

Kim, H., Sclar, M., Zhou, X., Le Bras, R., Kim, G., Choi, Y., & Sap, M. (2023). FANTOM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 14397–14413). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.890
