Design hints for efficient robotic vision - Lessons learned from a robotic platform

Abstract

Interest in autonomous vehicles has steadily increased in recent years. Tasks such as lane tracking and semaphore (traffic-light) detection and decoding are key features for a self-driving robot. This paper presents a path detection and tracking algorithm that combines Inverse Perspective Mapping and the Hough Transform with real-time vision techniques, together with a semaphore recognition system based on color segmentation. The proposed algorithm is evaluated, and a comparison of the results obtained with the real-time techniques is presented. The suggested architecture was tested on an autonomous driving robot that competed in the Portuguese autonomous vehicle competition "Festival Nacional de Robótica". The complete lane tracking algorithm takes about 1.4 ms per image, almost 60 times faster than the first algorithm tested, with good accuracy: a translation error below 0.03 m and a rotation error below 5°. The real-time semaphore recognition takes about 0.35 ms per detection and achieved a perfect score in the laboratory tests performed.
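The lane detection step described above builds on the Hough Transform's (rho, theta) voting scheme. The following is a minimal pure-Python sketch of that voting idea only, not the authors' implementation (which pairs it with Inverse Perspective Mapping on real camera images); the synthetic edge points and the 5° angular step are illustrative assumptions.

```python
import math
from collections import Counter

def hough_lines(points, thetas_deg=range(0, 180, 5)):
    """Accumulate votes in (rho, theta) space for a set of edge pixels.

    Each point votes, for every candidate angle theta, for the line
    rho = x*cos(theta) + y*sin(theta); rho is rounded to an integer bin.
    """
    acc = Counter()
    for x, y in points:
        for t in thetas_deg:
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(rho, t)] += 1
    return acc

# Synthetic "edge pixels" along the vertical line x = 5.
edges = [(5, y) for y in range(20)]

# The accumulator peak recovers that line: rho = 5, theta = 0 degrees,
# with all 20 edge points voting for it.
(rho, theta), votes = hough_lines(edges).most_common(1)[0]
print(rho, theta, votes)  # -> 5 0 20
```

In a real pipeline the input points would be the nonzero pixels of an edge map computed on the inverse-perspective-mapped image, and the dominant accumulator peaks would give the lane boundaries; the coarse 5° step here trades angular resolution for speed, which is the kind of trade-off the paper's real-time comparison is about.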

Citation (APA)

Costa, V., Cebola, P., Sousa, A., & Reis, A. (2018). Design hints for efficient robotic vision - Lessons learned from a robotic platform. Lecture Notes in Computational Vision and Biomechanics, 27, 515–524. https://doi.org/10.1007/978-3-319-68195-5_56
