Visual Navigation Based on Language Assistance and Memory

Citations: 1 · Mendeley readers: 8

This article is free to access.

Abstract

To remove outdoor mobile robots' dependence on geographic information systems (GIS) and to enable autonomous navigation in complex, changeable scenes, we propose a method that selects landmarks and adds prompt guidance so that a mobile robot can navigate using visual-language cues and memory. Visual-language input guides the robot's direction of movement, following human instructions; scene memory refers to the strategy of selecting landmarks along the route and remembering their scene features. When the agent passes a landmark, it can determine its position by matching the current observation against the stored features and then carry out the corresponding action. Experiments show that the proposed method achieves autonomous navigation without GIS and outperforms existing methods.
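The landmark-memory idea in the abstract — store features for selected landmarks along the route, then match the current observation against that memory to localize — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the cosine-similarity matcher, and the threshold are all assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def match_landmark(observation, landmark_memory, threshold=0.9):
    """Return the index of the best-matching remembered landmark, or None.

    Hypothetical sketch: the robot keeps one feature vector per selected
    landmark; while moving, it compares the current observation's features
    against memory and declares a landmark "reached" when similarity
    exceeds the threshold (an assumed value).
    """
    best_idx, best_sim = None, threshold
    for i, feat in enumerate(landmark_memory):
        sim = cosine(observation, feat)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx

# Toy usage: two remembered landmark features, one current observation.
memory = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(match_landmark([0.1, 0.99, 0.0], memory))  # → 1 (second landmark)
print(match_landmark([0.5, 0.5, 0.0], memory))   # → None (no match above threshold)
```

On a match, the agent would look up the action associated with that landmark (e.g., a turn indicated by the human annotation) and execute it.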

Citation (APA)

Xiao, S., & Fu, W. (2023). Visual Navigation Based on Language Assistance and Memory. IEEE Access, 11, 13996–14005. https://doi.org/10.1109/ACCESS.2023.3239837
