Self-localization from a 360-Degree Camera Based on the Deep Neural Network


Abstract

This research aimed to develop a method that can be used both for self-localization and for correcting dead reckoning from photographed images. To that end, two approaches were applied: estimating position from the surrounding environment, and estimating position from the distances between the camera's own position and the targets. A convolutional neural network (CNN) and convolutional long short-term memory (CLSTM) were used for self-localization, with panorama images and general images as input data. The most accurate configuration was a CNN with the pooling layers partially eliminated and a panorama image as input, which calculates circle intersections from the distances between the camera's own position and the targets, adopts the three closest intersection points, and does not estimate a position when even the closest intersections have a large error. The total accuracy was 0.217 [m] in the x- and y-coordinates. Given that the room measured about 12 [m] by 12 [m] and only about 3,000 training samples were used, this error is considered small.
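The geometric step described in the abstract (intersecting circles whose radii are the estimated distances to targets, keeping the three closest intersection points, and rejecting the estimate when they still disagree) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the target coordinates, the `max_spread` rejection threshold, and the averaging of the three chosen points are assumptions for the example.

```python
import itertools
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles given as (centre, radius); [] if none."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # concentric, too far apart, or one inside the other
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))     # half-length of the chord
    mx = x1 + a * (x2 - x1) / d               # foot of the chord on the centre line
    my = y1 + a * (y2 - y1) / d
    ox = h * (y2 - y1) / d                    # perpendicular offset
    oy = h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

def estimate_position(targets, dists, max_spread=0.5):
    """Pick one intersection per target pair so the chosen points lie closest
    together; return their mean, or None if even the best cluster disagrees."""
    pair_points = []
    for (c1, r1), (c2, r2) in itertools.combinations(zip(targets, dists), 2):
        pts = circle_intersections(c1, r1, c2, r2)
        if pts:
            pair_points.append(pts)
    if len(pair_points) < 3:
        return None  # not enough intersecting pairs for three points
    best, best_spread = None, float("inf")
    for combo in itertools.product(*pair_points):
        spread = max(math.dist(p, q)
                     for p, q in itertools.combinations(combo, 2))
        if spread < best_spread:
            best, best_spread = combo, spread
    if best_spread > max_spread:
        return None  # closest intersections still have a large error
    xs, ys = zip(*best)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example: three targets at known positions, distances measured from (1, 1).
targets = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [math.sqrt(2), math.sqrt(10), math.sqrt(10)]
print(estimate_position(targets, dists))
```

With noise-free distances the three chosen intersections coincide at the true position; with noisy distances the spread grows, and `max_spread` controls when the method declines to output an estimate, as the abstract describes.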

Cite (APA)

Hashimoto, S., & Namihira, K. (2020). Self-localization from a 360-Degree Camera Based on the Deep Neural Network. In Advances in Intelligent Systems and Computing (Vol. 943, pp. 145–158). Springer Verlag. https://doi.org/10.1007/978-3-030-17795-9_11
