Vision-Based 2D Navigation of Unmanned Aerial Vehicles in Riverine Environments with Imitation Learning

Abstract

Autonomous navigation of unmanned aerial vehicles (UAVs) in complex natural environments has attracted considerable research attention. In this paper, we develop an imitation learning framework and use it to train navigation policies for a UAV flying in complex, GPS-denied riverine environments. The UAV relies on a forward-facing camera to perform reactive maneuvers, navigating in 2D space by adapting its heading. In simulation, we compare a linear regression-based controller, an end-to-end neural network controller, and a variational autoencoder (VAE)-based controller trained with a data aggregation method. The results show that the VAE-based controller outperforms the other two in both training and testing environments, navigating the UAV over a longer traveling distance with a lower pilot intervention rate.
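The data aggregation approach mentioned above can be sketched at a high level. This is an illustrative sketch, not the paper's implementation: the VAE encoding of a camera image is abstracted as a latent feature vector, the expert pilot is simulated as a hypothetical linear mapping (`true_w`), and the policy is a simple least-squares heading regressor; all of these are assumptions for the demo.

```python
import numpy as np

# Hypothetical sketch of data-aggregation (DAgger-style) training for a
# heading controller. The "latent" vectors stand in for VAE encodings of
# camera images; the expert pilot is simulated as a known linear mapping.

rng = np.random.default_rng(0)
LATENT_DIM = 8
true_w = rng.normal(size=LATENT_DIM)  # simulated expert mapping (assumption)

def expert_heading(Z):
    """Expert pilot's heading commands for latent observations Z."""
    return Z @ true_w

def fit_policy(Z, y):
    """Least-squares heading regressor on the aggregated dataset."""
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w

# Round 0: behavior cloning on expert demonstrations.
Z = rng.normal(size=(50, LATENT_DIM))
y = expert_heading(Z)
w = fit_policy(Z, y)

# Aggregation rounds: roll out the learned policy, let the expert relabel
# the states the policy actually visits, and retrain on the union.
for _ in range(3):
    Z_new = rng.normal(size=(20, LATENT_DIM))  # states visited under policy
    y_new = expert_heading(Z_new)              # expert relabels them
    Z = np.vstack([Z, Z_new])
    y = np.concatenate([y, y_new])
    w = fit_policy(Z, y)

err = np.max(np.abs(w - true_w))
print(f"max weight error after aggregation: {err:.2e}")
```

Because rollout states are relabeled by the expert and added to the training set, the learned policy is trained on the state distribution it actually induces, which is the key idea behind reducing intervention rates.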

Citation (APA)
Wei, P., Liang, R., Michelmore, A., & Kong, Z. (2022). Vision-Based 2D Navigation of Unmanned Aerial Vehicles in Riverine Environments with Imitation Learning. Journal of Intelligent and Robotic Systems: Theory and Applications, 104(3). https://doi.org/10.1007/s10846-022-01593-5
