Language-based environment manipulation requires an agent to modify an environment by following natural language instructions, which is challenging due to the vast space of possible environment states. Recent work has proposed various approaches to this challenge; although they perform well in their intended environments, they are difficult to generalize across environments. In this work, we propose LEMON, a general framework for language-based environment manipulation tasks. Specifically, we first specify a task-agnostic formulation for language-based environment manipulation that can handle various environments with the same generative language model. We then propose an execution-guided pre-training strategy that injects prior knowledge of the environments into the language model using a purely synthetic pre-training corpus. Experimental results on ALCHEMY, SCENE, TANGRAMS, PROPARA and RECIPES demonstrate the effectiveness of LEMON: it achieves new state-of-the-art results on four of the five tasks, and the execution-guided pre-training strategy brings remarkable improvements on all experimental tasks.
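To make the execution-guided pre-training idea concrete, the following is a minimal sketch of how a purely synthetic pre-training corpus could be generated by executing random actions in a toy environment. The beaker-world environment, the action set, and the source/target string format are illustrative assumptions for this sketch, loosely inspired by the ALCHEMY task; they are not the paper's actual implementation.

```python
import random

# Toy beaker-world environment, loosely inspired by the ALCHEMY task.
# The environment, action set, and corpus format here are illustrative
# assumptions, not the paper's actual implementation.

def execute(state, action):
    """Apply one action to a tuple of beaker levels; return the new state."""
    op, i, j = action
    state = list(state)
    if op == "pour" and state[i] > 0:      # move one unit from beaker i to j
        state[i] -= 1
        state[j] += 1
    elif op == "drain" and state[i] > 0:   # remove one unit from beaker i
        state[i] -= 1
    return tuple(state)

def synthesize_example(rng, n_beakers=3, n_actions=2):
    """Sample random actions, execute them, and emit one seq2seq
    pre-training pair: (start state + target state) -> action sequence."""
    start = tuple(rng.randint(0, 3) for _ in range(n_beakers))
    state, actions = start, []
    for _ in range(n_actions):
        action = (rng.choice(["pour", "drain"]),
                  rng.randrange(n_beakers), rng.randrange(n_beakers))
        state = execute(state, action)   # execution guarantees a valid target
        actions.append(action)
    src = f"start: {start} target: {state}"
    tgt = " ; ".join(f"{op} {i} {j}" for op, i, j in actions)
    return src, tgt

rng = random.Random(0)
corpus = [synthesize_example(rng) for _ in range(3)]
```

Because every target action sequence is obtained by actually executing actions, each synthetic pair is consistent with the environment's transition dynamics, which is the prior knowledge the pre-training stage is meant to inject before fine-tuning on human-written instructions.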
Shi, Q., Liu, Q., Chen, B., Zhang, Y., Liu, T., & Lou, J. G. (2022). LEMON: Language-Based Environment Manipulation via Execution-Guided Pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 471–485). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.33