Simultaneous localization and mapping for event-based vision systems

Abstract

We propose a novel method for vision-based simultaneous localization and mapping (vSLAM) using a biologically inspired vision sensor that mimics the human retina. The sensor consists of a 128×128 array of asynchronously operating pixels, each of which independently emits an event upon a temporal change in illumination. Such a representation generates small amounts of data with high temporal precision; however, most classic computer vision algorithms must be reworked, as they require full RGB(-D) images at fixed frame rates. Our vSLAM algorithm operates on individual pixel events and generates high-quality 2D environmental maps with precise robot localization. We evaluate our method against a state-of-the-art marker-based external tracking system and demonstrate real-time performance on standard computing hardware. © 2013 Springer-Verlag.
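The per-event processing model described in the abstract can be illustrated with a small sketch. The `Event` fields and the grid-update scheme below are illustrative assumptions, not the paper's actual algorithm or data layout: the sketch simply shows how a 2D map can be accumulated one asynchronous pixel event at a time, given a pose estimate, instead of from full frames.

```python
from dataclasses import dataclass
import math

# A DVS-style pixel event: sensor coordinates, timestamp, and polarity.
# Field names are illustrative, not taken from the paper.
@dataclass(frozen=True)
class Event:
    x: int          # pixel column, 0..127
    y: int          # pixel row, 0..127
    t: int          # timestamp in microseconds
    polarity: int   # +1 brightness increase, -1 decrease

class EventMap2D:
    """Toy 2D grid map updated one event at a time.

    Each event is projected into world coordinates using the current
    robot pose estimate and increments a grid-cell score. This is a
    simplified stand-in for the paper's map model.
    """
    def __init__(self, size=64, resolution=0.05):
        self.size = size                  # grid is size x size cells
        self.resolution = resolution      # metres per cell
        self.grid = [[0.0] * size for _ in range(size)]

    def update(self, event, pose):
        """Integrate one event given pose = (x_m, y_m, theta_rad)."""
        px, py, theta = pose
        # Map the 128x128 pixel array onto an assumed 0.5 m x 0.5 m
        # sensor footprint (arbitrary choice for this sketch).
        sx = (event.x / 128.0 - 0.5) * 0.5
        sy = (event.y / 128.0 - 0.5) * 0.5
        # Rotate into the world frame and translate by the robot position.
        wx = px + math.cos(theta) * sx - math.sin(theta) * sy
        wy = py + math.sin(theta) * sx + math.cos(theta) * sy
        i = int(wx / self.resolution) + self.size // 2
        j = int(wy / self.resolution) + self.size // 2
        if 0 <= i < self.size and 0 <= j < self.size:
            self.grid[j][i] += 1.0        # accumulate evidence per event
```

Because each update touches a single cell, the map can keep pace with the sensor's asynchronous event stream without waiting for frame boundaries, which is the key property the abstract attributes to event-based processing.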

Citation (APA)

Weikersdorfer, D., Hoffmann, R., & Conradt, J. (2013). Simultaneous localization and mapping for event-based vision systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7963 LNCS, pp. 133–142). https://doi.org/10.1007/978-3-642-39402-7_14
