Deep Learning Architectures for Navigation Using Forward Looking Sonar Images

Abstract

This paper investigates the use of supervised Deep Learning (DL) networks to process sonar images for underwater navigation. State-of-the-art DL techniques for micro-navigation using sequences of optical images have been adapted to work with sonar images. Specifically, the DL networks estimate the Forward-Looking Sonar (FLS) motion in three degrees of freedom, corresponding to x- and y-translation and rotation around the z-axis. The state-of-the-art DL architectures and a proposed new architecture are investigated for motion estimation. They are trained on images generated by an FLS simulator. The data sets consist of pairs of consecutive images, each associated with a label representing the motion of the sonar platform between the two images. The results show the effectiveness of the DL architectures, which provide millimetre-level accuracy for translation and better than 0.1° accuracy for rotation between two consecutive sonar images. Examples of trajectory estimation and mosaic building using simulated and real sonar images are also presented.
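To make the setup concrete, the sketch below shows the general pattern the abstract describes: a network that takes a pair of consecutive sonar images and regresses the 3-DoF platform motion (x- and y-translation, rotation about z), trained in a supervised way against simulator-generated labels. This is a minimal illustrative example, not the paper's actual architectures; the layer sizes, image dimensions, and names (SonarMotionNet) are assumptions.

# Minimal sketch of image-pair motion regression (illustrative, not the paper's networks).
import torch
import torch.nn as nn

class SonarMotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two single-channel sonar frames are stacked into a 2-channel input.
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: x-translation, y-translation, rotation about z.
        self.head = nn.Linear(128, 3)

    def forward(self, img_prev, img_curr):
        x = torch.cat([img_prev, img_curr], dim=1)   # (N, 2, H, W)
        x = self.features(x).flatten(1)              # (N, 128)
        return self.head(x)                          # (N, 3): [dx, dy, d_theta]

# Supervised training step against simulator labels (dummy tensors for illustration).
model = SonarMotionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

img_prev = torch.rand(8, 1, 128, 256)   # previous sonar frames
img_curr = torch.rand(8, 1, 128, 256)   # current sonar frames
labels = torch.rand(8, 3)               # ground-truth [dx, dy, d_theta] from the simulator

loss = criterion(model(img_prev, img_curr), labels)
loss.backward()
optimizer.step()

At inference time, the per-pair motion estimates would be chained frame to frame to produce the trajectory and mosaic results mentioned in the abstract.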

Citation (APA)

Almanza-Medina, J. E., Henson, B., & Zakharov, Y. V. (2021). Deep Learning Architectures for Navigation Using Forward Looking Sonar Images. IEEE Access, 9, 33880–33896. https://doi.org/10.1109/ACCESS.2021.3061440
