Abstract
Text-based games (TGs) are exciting testbeds for developing deep reinforcement learning techniques due to their partially observed environments and large action spaces. In these games, the agent learns to explore the environment via natural language interactions with the game simulator. A fundamental challenge in TGs is exploring the large action space efficiently when the agent has not yet acquired enough knowledge about the environment. We propose COMMEXPL, an exploration technique that injects external commonsense knowledge, via a pretrained language model (LM), into the agent during training when the agent is most uncertain about its next action. Our method improves the collected game scores during training in four out of nine games from Jericho. Additionally, the produced trajectories of actions exhibit lower perplexity when scored with a pretrained LM, indicating closer alignment with human language.
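To make the uncertainty-triggered injection concrete, here is a minimal Python sketch. It assumes the agent exposes a categorical policy over the game's valid actions and that a per-action commonsense plausibility score from a pretrained LM is available; the entropy threshold, the blending weight `alpha`, and the `lm_score` callable are illustrative assumptions, not the authors' exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def policy_entropy(logits: torch.Tensor) -> float:
    """Shannon entropy of the categorical policy over valid actions."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum().item()

def select_action(logits, valid_actions, lm_score, threshold=0.8, alpha=0.5):
    """Sample from the policy, but when its entropy (normalised by
    log |A|) exceeds `threshold`, blend the policy logits with
    commonsense log-probabilities from a pretrained LM to bias
    exploration toward plausible actions."""
    norm_entropy = policy_entropy(logits) / math.log(len(valid_actions))
    if norm_entropy > threshold:
        # lm_score(a) is a hypothetical callable returning a commonsense
        # plausibility score (e.g. a negative LM loss) for action a.
        lm = torch.tensor([lm_score(a) for a in valid_actions])
        logits = (1 - alpha) * logits + alpha * F.log_softmax(lm, dim=-1)
    dist = torch.distributions.Categorical(logits=logits)
    return valid_actions[dist.sample().item()]
```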
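The perplexity evaluation can be reproduced with any off-the-shelf causal LM. The sketch below scores a trajectory of action strings with GPT-2 via the HuggingFace transformers library; the choice of `gpt2` and the simple space-joining of actions are assumptions for illustration, not necessarily the paper's exact setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def trajectory_perplexity(actions: list[str]) -> float:
    """Perplexity of a trajectory of action strings under GPT-2.
    Lower values indicate the trajectory reads more like natural language."""
    text = " ".join(actions)
    enc = tokenizer(text, return_tensors="pt")
    # Passing labels makes the model return the mean token-level
    # cross-entropy loss; exponentiating it gives perplexity.
    loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(trajectory_perplexity(["open mailbox", "take leaflet", "go north"]))
```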
Citation
Ryu, D. K., Shareghi, E., Fang, M., Xu, Y., Pan, S., & Haffari, G. (2022). Fire Burns, Sword Cuts: Commonsense Inductive Bias for Exploration in Text-based Games. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 515–522). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-short.56