Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning

Abstract

Long-horizon task planning is essential for the development of intelligent assistive and service robots. In this work, we investigate the applicability of a smaller class of large language models (LLMs), specifically GPT-2, to robotic task planning by learning to decompose tasks into subgoal specifications that a planner executes sequentially. Our method grounds the LLM's input in the domain, represented as a scene graph, enabling it to translate human requests into executable robot plans and thereby to reason over long-horizon tasks such as those in the ALFRED benchmark. We compare our approach with classical planning and baseline methods to examine the applicability and generalizability of LLM-based planners. Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating promising potential for future applications of neuro-symbolic planning methods in robotics.
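To make the grounding idea concrete, below is a minimal sketch (not the authors' code) of how a scene graph and a natural-language request could be serialized into a single prompt for finetuning GPT-2 to emit a subgoal sequence. It assumes the Hugging Face transformers API; the (subject, relation, object) serialization, the <scene>/<request>/<plan> markers, and the ALFRED-style subgoal names are illustrative assumptions, not details taken from the paper.

    import torch
    from transformers import GPT2TokenizerFast, GPT2LMHeadModel

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def serialize_scene_graph(triples):
        # Flatten (subject, relation, object) triples into plain text for the LM.
        return " ; ".join(f"{s} {r} {o}" for s, r, o in triples)

    # Hypothetical training example: a tiny scene graph, a request, and a target plan.
    scene = [("apple", "on", "countertop"), ("fridge", "in", "kitchen")]
    request = "Put a chilled apple on the countertop."
    plan = (" GotoLocation(apple) ; PickupObject(apple) ; GotoLocation(fridge) ;"
            " CoolObject(apple) ; GotoLocation(countertop) ; PutObject(apple, countertop)")

    prompt = f"<scene> {serialize_scene_graph(scene)} <request> {request} <plan>"

    # Standard causal-LM finetuning step: compute the loss only on the plan tokens.
    enc = tokenizer(prompt + plan + tokenizer.eos_token, return_tensors="pt")
    prompt_len = len(tokenizer(prompt)["input_ids"])
    labels = enc["input_ids"].clone()
    labels[:, :prompt_len] = -100  # ignore prompt tokens in the loss

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()

At inference time, the same prompt (without the plan suffix) would be passed to model.generate to decode a subgoal sequence, which a downstream symbolic planner could then execute step by step.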

Citation (APA)
Chalvatzaki, G., Younes, A., Nandha, D., Le, A. T., Ribeiro, L. F. R., & Gurevych, I. (2023). Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning. Frontiers in Robotics and AI, 10. https://doi.org/10.3389/frobt.2023.1221739
