In this paper, we propose a goal-directed navigation system consisting of two planning strategies that both rely on vision but operate at different scales. The first operates at a global scale and is responsible for generating spatial trajectories that lead to the vicinity of the target. It is a biologically inspired neural planning and navigation model involving learned representations of place and head-direction (HD) cells, in which a planning network is trained to predict the activities of these cell populations given selected action signals. Recursive prediction and optimization of the continuous action signals generate goal-directed activation sequences, in which the state and action spaces are represented by the population activities of place, HD, and motor neurons. To compensate for the residual error of this look-ahead, model-based planning, the second strategy relies on visual recognition and performs target-driven reaching at a local scale, allowing the robot to reach the target with finer accuracy. Experimental results show that by combining these two planning strategies, the robot can navigate precisely to a distant target.
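To make the look-ahead scheme concrete, the following is a minimal sketch, not the authors' implementation: a learned forward model predicts the next place-/HD-cell population activity from the current activity and a continuous action signal, and a short action sequence is optimized by gradient descent so that the final predicted activity matches the goal's. All names and hyperparameters (`ForwardModel`, `plan_actions`, `horizon`, the network sizes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Hypothetical planning network: predicts the next place-/HD-cell
    population activity from the current activity and an action signal."""
    def __init__(self, n_state: int, n_action: int, n_hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state + n_action, n_hidden),
            nn.Tanh(),
            nn.Linear(n_hidden, n_state),
            nn.Sigmoid(),  # cell activities bounded in [0, 1]
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def plan_actions(model: ForwardModel, start: torch.Tensor, goal: torch.Tensor,
                 n_action: int, horizon: int = 10, steps: int = 200,
                 lr: float = 0.05) -> torch.Tensor:
    """Recursively roll the forward model out over `horizon` steps and
    optimize the continuous action sequence so that the final predicted
    population activity matches the goal activity."""
    actions = torch.zeros(horizon, n_action, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        state = start
        for t in range(horizon):
            # recursive prediction: feed the prediction back as the next input
            state = model(state, torch.tanh(actions[t]))  # bounded motor signal
        loss = torch.mean((state - goal) ** 2)
        loss.backward()
        opt.step()
    return torch.tanh(actions).detach()
```

In such a scheme, the optimized action sequence would steer the robot into the neighborhood of the target, after which the vision-based local reaching strategy described above takes over to correct the remaining error.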
Zhou, X., Weber, C., Bothe, C., & Wermter, S. (2018). A hybrid planning strategy through learning from vision for target-directed navigation. In Lecture Notes in Computer Science, Vol. 11140, pp. 304–311. Springer. https://doi.org/10.1007/978-3-030-01421-6_30