Keep CALM and explore: Language models for action generation in text-based games


Abstract

Text-based games present a unique challenge for autonomous agents to operate in natural language and handle enormous action spaces. In this paper, we propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state. Our key insight is to train language models on human gameplay, where people demonstrate linguistic priors and a general game sense for promising actions conditioned on game history. We combine CALM with a reinforcement learning agent which re-ranks the generated action candidates to maximize in-game rewards. We evaluate our approach using the Jericho benchmark (Hausknecht et al., 2019a), on games unseen by CALM during training. Our method obtains a 69% relative improvement in average game score over the previous state-of-the-art model. Surprisingly, on half of these games, CALM is competitive with or better than other models that have access to ground truth admissible actions.
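The two-stage pipeline the abstract describes — a language model proposes a compact set of action candidates, and an RL agent re-ranks them by estimated value — can be sketched in minimal form. This is an illustrative stand-in, not the paper's implementation: `generate_candidates`, `q_value`, and the toy action vocabulary are all hypothetical, and the real CALM is a GPT-style model trained on human transcripts while the re-ranker is a learned Q-network.

```python
import random

# Toy action vocabulary; CALM generates free-form action strings instead.
ACTION_VOCAB = ["go north", "open mailbox", "take leaflet",
                "read leaflet", "go south"]

def generate_candidates(observation, k=3, seed=0):
    """Stand-in for CALM: propose k plausible actions for this state.
    (The real model conditions on game history via a trained LM.)"""
    rng = random.Random(seed)
    return rng.sample(ACTION_VOCAB, k)

def q_value(observation, action, weights):
    """Stand-in for the RL re-ranker: score an action in context.
    (The real agent learns these values from in-game rewards.)"""
    text = observation + " " + action
    return sum(weights.get(tok, 0.0) for tok in text.split())

def choose_action(observation, weights, k=3):
    """Generate a compact candidate set, then pick the highest-scoring one."""
    candidates = generate_candidates(observation, k)
    return max(candidates, key=lambda a: q_value(observation, a, weights))
```

The key design point this mirrors is that the RL agent never searches the full combinatorial action space: it only chooses among the handful of candidates the language model deems linguistically and contextually plausible.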

Citation (APA)

Yao, S., Rao, R., Hausknecht, M., & Narasimhan, K. (2020). Keep CALM and explore: Language models for action generation in text-based games. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 8736–8754). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.704
