Real-time on-board image processing using an embedded GPU for monocular vision-based navigation

Abstract

In this work we present a new image-based navigation method for guiding a mobile robot equipped only with a monocular camera through a naturally delimited path. The method is based on segmenting the image and classifying each super-pixel to infer a contour of navigable space. Although image segmentation is a costly computation, we use a low-power embedded GPU to obtain the framerate necessary for reactive control of the robot. Starting from an existing GPU implementation of the quick-shift segmentation algorithm, we introduce some simple optimizations that yield a speedup, making real-time processing on board a mobile robot possible. Experiments performed both on an image dataset and in an online on-board execution of the system in an outdoor environment demonstrate the validity of this approach. © 2012 Springer-Verlag.
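To illustrate the algorithm the abstract names, the following is a minimal CPU sketch of quick-shift super-pixel segmentation in NumPy. It is not the authors' GPU implementation; the parameter names (`sigma` for the density kernel, `tau` for the maximum link distance) and the joint position-plus-colour feature space are common conventions for quick-shift, assumed here for illustration.

```python
import numpy as np

def quickshift(image, sigma=2.0, tau=8.0):
    """Toy quick-shift segmentation (O(n^2) memory; small images only).

    Each pixel is linked to its nearest neighbour within distance tau
    that has a higher kernel-density estimate; the roots of the
    resulting forest define the super-pixel labels.
    """
    h, w, c = image.shape
    # Feature vector per pixel: (x, y, colour channels)
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.dstack([xs, ys, image.astype(float)]).reshape(-1, c + 2)
    n = feats.shape[0]

    # Parzen density estimate with a Gaussian kernel over all pairs
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

    # Link each pixel to its nearest higher-density neighbour within tau;
    # pixels with no such neighbour remain their own root.
    parent = np.arange(n)
    for i in range(n):
        cand = np.where((density > density[i]) & (d2[i] < tau ** 2))[0]
        if cand.size:
            parent[i] = cand[np.argmin(d2[i, cand])]

    # Follow links up to the root so every pixel carries its segment label
    for i in range(n):
        while parent[parent[i]] != parent[i]:
            parent[i] = parent[parent[i]]
    return parent.reshape(h, w)
```

On the GPU, the pairwise density and nearest-higher-density-neighbour searches parallelize naturally over pixels, which is where the paper's speedup comes from; this sketch keeps everything serial for clarity.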

APA

Nitsche, M. A., & De Cristóforis, P. (2012). Real-time on-board image processing using an embedded GPU for monocular vision-based navigation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7441 LNCS, pp. 591–598). https://doi.org/10.1007/978-3-642-33275-3_73
