Context vector-based visual mapless navigation in indoor using hierarchical semantic information and meta-learning

Abstract

Visual mapless navigation (VMN), which models a direct mapping from sensory inputs to agent actions, aims to navigate from a random starting location to a prescribed goal in an unseen scene. A fundamental yet challenging issue in visual mapless navigation is generalizing to new scenes; a further pivotal concern is designing a method that enables effective policy learning. To address these issues, we introduce a novel visual mapless navigation model that integrates hierarchical semantic information, represented by a context vector, with meta-learning to narrow the generalization gap between seen and unseen environments. Extensive experimental results on the AI2-THOR benchmark demonstrate that our model significantly outperforms the state-of-the-art model, by 15.79% in SPL and 23.83% in success rate. In addition, the exploration-rate experiment shows that our model effectively reduces the agent's invalid exploration behavior and accelerates convergence. Our implementation code and data are available at https://github.com/zhiyu-tech/WHU-CVVMN.
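As context for the reported numbers: SPL (Success weighted by Path Length) is the standard embodied-navigation metric that weights each episode's success indicator by the ratio of the shortest-path length to the larger of that length and the path actually traveled, averaged over episodes. The abstract does not detail the architecture, but the two ingredients it names, a policy conditioned on a semantic context vector and meta-learning for cross-scene generalization, can be sketched roughly as below. This is a minimal illustration under stated assumptions (PyTorch >= 2.0, invented module names and dimensions, and a supervised stand-in loss in place of the authors' RL objective), not the paper's implementation.

```python
# Minimal sketch -- NOT the authors' implementation. It illustrates the two
# ideas named in the abstract: (1) an actor-critic policy that fuses a visual
# feature with a hierarchical-semantic "context vector", and (2) a MAML-style
# meta-update across navigation tasks (scenes). All names, dimensions, and
# the supervised stand-in loss are assumptions. Requires PyTorch >= 2.0.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class ContextVectorPolicy(nn.Module):
    """Actor-critic head over a fused visual + semantic-context embedding."""

    def __init__(self, visual_dim=512, context_dim=300,
                 hidden_dim=256, num_actions=6):
        super().__init__()
        self.fuse = nn.Linear(visual_dim + context_dim, hidden_dim)
        self.actor = nn.Linear(hidden_dim, num_actions)  # action logits
        self.critic = nn.Linear(hidden_dim, 1)           # state value

    def forward(self, visual_feat, context_vec):
        h = F.relu(self.fuse(torch.cat([visual_feat, context_vec], dim=-1)))
        return self.actor(h), self.critic(h)


def nav_loss(policy, params, batch):
    # Stand-in objective: action cross-entropy plus value regression.
    # The paper presumably derives its loss from RL rollouts instead.
    visual, context, actions, returns = batch
    logits, value = functional_call(policy, params, (visual, context))
    return (F.cross_entropy(logits, actions)
            + 0.5 * F.mse_loss(value.squeeze(-1), returns))


def maml_meta_step(policy, meta_opt, tasks, inner_lr=0.01):
    """One MAML-style meta-update over a batch of tasks (scenes).

    Each task is a (support_batch, query_batch) pair: adapt on the support
    batch in the inner loop, then backpropagate the query loss through that
    adaptation to improve the shared initialization.
    """
    meta_opt.zero_grad()
    params = dict(policy.named_parameters())
    for support, query in tasks:
        grads = torch.autograd.grad(nav_loss(policy, params, support),
                                    params.values(), create_graph=True)
        fast = {name: p - inner_lr * g                    # adapted weights
                for (name, p), g in zip(params.items(), grads)}
        (nav_loss(policy, fast, query) / len(tasks)).backward()
    meta_opt.step()


if __name__ == "__main__":
    # Smoke test with random tensors standing in for real observations.
    policy = ContextVectorPolicy()
    meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    fake = (torch.randn(8, 512), torch.randn(8, 300),
            torch.randint(0, 6, (8,)), torch.randn(8))
    maml_meta_step(policy, meta_opt, tasks=[(fake, fake)])
```

Here each task performs a single inner-loop adaptation step, the common MAML formulation; the paper's actual meta-learning variant, losses, and hyperparameters are not given in the abstract and would need to be taken from the paper or the linked repository.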

Citation (APA)

Li, F., Guo, C., Zhang, H., & Luo, B. (2023). Context vector-based visual mapless navigation in indoor using hierarchical semantic information and meta-learning. Complex & Intelligent Systems, 9(2), 2031–2041. https://doi.org/10.1007/s40747-022-00902-7
