An Artificial Neural SLAM Framework for Event-Based Vision

Abstract

The simultaneous localization and mapping (SLAM) problem for autonomous robots can benefit greatly from event-based cameras, which consume very little power compared to conventional frame-based cameras while offering high temporal resolution and a wide dynamic range. In this study, we propose a convolutional neural SLAM framework that operates solely on event data. Event-based cameras generate events only at pixels whose brightness changes, so the event stream is rich in motion and edge information; the goal of the proposed framework is to make all estimations from the information encoded in that stream. The proposed solution takes the form of keyframe-based visual SLAM and consists of three neural networks that estimate the relative camera pose, the log-depth map, and feature descriptors for loop-closure detection. We present the network architectures and learning curves for the trained networks and show that the networks learn their tasks successfully. The proposed method was developed and tested on a new dataset generated with the CARLA simulator. We show that the proposed method constitutes a full SLAM solution that keeps global drift under control through loop-closure estimation. Evaluation metrics for the estimations, an evaluation of the global model, and an analysis of run-time performance are also presented.
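The abstract describes the pipeline only at a high level. As a rough illustration of the idea, the following minimal Python/PyTorch sketch shows how a three-network, keyframe-based event SLAM step could be wired together: events are accumulated into a two-channel image, one network regresses relative pose from a keyframe pair, one predicts a per-pixel log-depth map, and one produces an embedding matched against stored keyframes for loop closure. All names (events_to_frame, SmallCNN, LogDepthNet, the similarity threshold) and the tiny architectures here are hypothetical assumptions for illustration only, not the networks described in the paper.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def events_to_frame(events, height, width):
    """Accumulate (x, y, t, polarity) events into a 2-channel count image."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _, p in events:
        frame[int(p > 0), int(y), int(x)] += 1.0  # channel 0: OFF, channel 1: ON
    return torch.from_numpy(frame)

class SmallCNN(nn.Module):
    """Tiny convolutional encoder with a vector output head (illustrative)."""
    def __init__(self, in_ch, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class LogDepthNet(nn.Module):
    """Fully convolutional head predicting a per-pixel log-depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

pose_net = SmallCNN(in_ch=4, out_dim=6)   # 6-DoF relative pose from a stacked (keyframe, frame) pair
loop_net = SmallCNN(in_ch=2, out_dim=64)  # embedding per keyframe for loop-closure retrieval
depth_net = LogDepthNet()

def detect_loop(embedding, keyframe_embeddings, threshold=0.9):
    """Return the index of the best-matching past keyframe, or None."""
    if not keyframe_embeddings:
        return None
    db = torch.stack(keyframe_embeddings)
    sims = F.cosine_similarity(db, embedding.unsqueeze(0))
    best = int(sims.argmax())
    return best if float(sims[best]) > threshold else None

# Toy usage: two synthetic events on a 128x128 sensor.
h = w = 128
keyframe = events_to_frame([(64, 40, 0.001, 1)], h, w)
frame = events_to_frame([(65, 40, 0.002, -1)], h, w)
with torch.no_grad():
    rel_pose = pose_net(torch.cat([keyframe, frame], dim=0).unsqueeze(0))  # (1, 6)
    log_depth = depth_net(frame.unsqueeze(0))                              # (1, 1, h, w)
    emb = loop_net(frame.unsqueeze(0)).squeeze(0)                          # (64,)
    match = detect_loop(emb, [])                                           # None: empty database

In a real keyframe SLAM loop, the relative poses would be chained into a global trajectory and the loop-closure matches fed to a pose-graph optimizer to correct accumulated drift; those components are omitted here for brevity.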

Cite (APA)

Gelen, A. G., & Atasoy, A. (2023). An Artificial Neural SLAM Framework for Event-Based Vision. IEEE Access, 11, 58436–58450. https://doi.org/10.1109/ACCESS.2023.3282637
