Knowledge-Enhanced Scene Context Embedding for Object-Oriented Navigation of Autonomous Robots

Abstract

Object-oriented navigation in unknown environments with only vision as input has been a challenging task for autonomous robots. Introducing semantic knowledge into the model has proven to be an effective means of improving the suboptimal performance and generalization of existing end-to-end learning methods. In this paper, we improve object-oriented navigation by proposing a knowledge-enhanced scene context embedding method, which consists of a principled knowledge graph and a novel 6-D context vector. The knowledge graph (named MattKG) is derived from large-scale real-world scenes and contains object-level relationships that help robots understand the environment. The novel 6-D context vector replaces traditional pixel-level raw features by embedding observations as scene context. Experimental results on the public AI2-THOR dataset indicate that our method improves both navigation success rate and efficiency compared with other state-of-the-art models. We also deploy the proposed method on a physical robot and apply it in a real-world environment.
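The abstract describes two components: an object-relationship knowledge graph (MattKG) mined from real scenes, and a 6-D context vector that embeds each observation instead of raw pixels. The abstract does not specify the six components or the graph construction, so the following is a hypothetical sketch: it builds a simple object co-occurrence graph from scene observations and assembles an assumed 6-D vector per detection (confidence, bounding-box center, size, and graph-derived relatedness to the navigation target). All names and the choice of components are illustrative assumptions, not the paper's actual definitions.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sketch only: a co-occurrence knowledge graph in the spirit of
# MattKG, plus an assumed 6-D context vector per detected object. The paper
# defines its own graph and vector layout; this illustrates the general idea.

def build_cooccurrence_graph(scenes):
    """scenes: iterable of sets of object labels observed together."""
    edge_counts = defaultdict(int)
    node_counts = defaultdict(int)
    for objects in scenes:
        for obj in objects:
            node_counts[obj] += 1
        for a, b in combinations(sorted(objects), 2):
            edge_counts[(a, b)] += 1
    return node_counts, edge_counts

def relatedness(node_counts, edge_counts, a, b):
    """Normalized co-occurrence strength between two object classes."""
    if a == b:
        return 1.0
    key = tuple(sorted((a, b)))
    denom = min(node_counts.get(a, 0), node_counts.get(b, 0))
    return edge_counts.get(key, 0) / denom if denom else 0.0

def context_vector(det, target, node_counts, edge_counts):
    """Assumed 6-D embedding for one detection:
    [confidence, center_x, center_y, width, height, relatedness_to_target].
    `det['bbox']` is (x1, y1, x2, y2), normalized to [0, 1]."""
    x1, y1, x2, y2 = det["bbox"]
    return [
        det["confidence"],
        (x1 + x2) / 2,   # bounding-box center x
        (y1 + y2) / 2,   # bounding-box center y
        x2 - x1,         # bounding-box width
        y2 - y1,         # bounding-box height
        relatedness(node_counts, edge_counts, det["label"], target),
    ]
```

One design point this sketch reflects: compressing each observation to a fixed-length context vector keeps the policy input low-dimensional and scene-agnostic, which is consistent with the abstract's claim of better generalization than pixel-level features.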

Citation (APA)
Li, Y., Xiao, N., Huo, X., & Wu, X. (2022). Knowledge-Enhanced Scene Context Embedding for Object-Oriented Navigation of Autonomous Robots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13455 LNAI, pp. 3–12). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-13844-7_1
