Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models

20 citations · 34 Mendeley readers

Abstract

Reasoning about time is of fundamental importance. Many facts are time-dependent. For example, athletes change teams from time to time, and different government officials are elected periodically. Previous time-dependent question answering (QA) datasets tend to be biased in either their coverage of time spans or their question types. In this paper, we introduce TEMPREASON, a comprehensive probing dataset for evaluating the temporal reasoning capability of large language models. Our dataset includes questions at three levels of temporal reasoning. In addition, we propose a novel learning framework to improve the temporal reasoning capability of large language models, based on temporal span extraction and time-sensitive reinforcement learning. We conducted experiments in closed-book QA, open-book QA, and reasoning QA settings and demonstrated the effectiveness of our approach.
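The abstract's notion of a time-dependent fact (e.g., which team an athlete plays for in a given year) can be illustrated with a minimal sketch. The `TemporalFact` structure, the lookup helper, and the example data below are hypothetical illustrations, not the actual TEMPREASON dataset format:

```python
from dataclasses import dataclass

@dataclass
class TemporalFact:
    """A subject-relation-object fact that is valid only over a time span."""
    subject: str
    relation: str
    obj: str
    start: int  # first year the fact holds
    end: int    # last year the fact holds

def answer_at(facts, subject, relation, year):
    """Return the object valid for (subject, relation) in the given year,
    or None if no fact covers that year -- the kind of time-sensitive
    lookup that a temporal QA probe tests."""
    for f in facts:
        if f.subject == subject and f.relation == relation and f.start <= year <= f.end:
            return f.obj
    return None

facts = [
    TemporalFact("Lionel Messi", "plays for", "FC Barcelona", 2004, 2020),
    TemporalFact("Lionel Messi", "plays for", "Paris Saint-Germain", 2021, 2023),
]

# A time-dependent question: "Who did Lionel Messi play for in 2015?"
print(answer_at(facts, "Lionel Messi", "plays for", 2015))  # FC Barcelona
```

A language model answering such a question correctly must select the answer conditioned on the queried time, which is exactly what fixed, time-agnostic knowledge retrieval gets wrong.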

Citation (APA)

Tan, Q., Ng, H. T., & Bing, L. (2023). Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 14820–14835). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.828
