Autonomous mapping and navigation through utilization of edge-based optical flow and time-to-collision

Abstract

This paper proposes a cost-effective approach to mapping and navigating an area using only a single low-resolution camera mounted on a “smart robot,” avoiding the cost and unreliability of radar and sonar systems. The implementation is divided into three main parts: object detection, autonomous movement, and mapping, the last achieved by spiraling inward and applying the A* pathfinding algorithm. Object detection is performed with a modified Horn–Schunck optical flow algorithm that tracks pixel brightness across subsequent frames, producing outward-pointing flow vectors. These vectors are then restricted to object boundaries using Sobel edge detection. Autonomous movement is achieved by locating the focus of expansion of the flow field and computing time-to-collision estimates, which are used to steer the robot. The algorithms are programmed in MATLAB and Java and deployed on a LEGO Mindstorms NXT 2.0 robot for real-time testing with a low-resolution video camera. Numerous trials across diverse situations validate the approach, showing that a room can be autonomously navigated and mapped using optical input alone.
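
As a concrete illustration of the navigation step, the sketch below estimates the focus of expansion (FOE) from a set of optical-flow vectors and derives a time-to-collision (TTC) from it. This is a minimal, self-contained Java sketch under assumed conventions: the least-squares FOE fit and all names (TimeToCollision, Flow, focusOfExpansion) are illustrative, not taken from the paper's implementation.

// Hedged sketch: FOE and per-pixel TTC from sparse optical-flow vectors.
// The least-squares formulation and all identifiers are illustrative
// assumptions, not the authors' exact method.
public final class TimeToCollision {

    /** A flow vector (u, v) observed at image point (x, y). */
    public static final class Flow {
        final double x, y, u, v;
        Flow(double x, double y, double u, double v) {
            this.x = x; this.y = y; this.u = u; this.v = v;
        }
    }

    /**
     * Least-squares FOE: under pure forward translation, each flow vector
     * points away from the FOE, so the FOE lies near the line through
     * (x, y) with direction (u, v). Each sample gives one linear constraint
     *   v*fx - u*fy = v*x - u*y,
     * and we solve the 2x2 normal equations for (fx, fy).
     */
    public static double[] focusOfExpansion(Flow[] flows) {
        double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
        for (Flow f : flows) {
            double c = f.v * f.x - f.u * f.y;     // right-hand side of constraint
            a11 += f.v * f.v; a12 += -f.v * f.u; a22 += f.u * f.u;
            b1  += f.v * c;   b2  += -f.u * c;
        }
        double det = a11 * a22 - a12 * a12;
        if (Math.abs(det) < 1e-12) throw new IllegalStateException("degenerate flow field");
        return new double[] { (a22 * b1 - a12 * b2) / det,
                              (a11 * b2 - a12 * b1) / det };
    }

    /**
     * TTC (in frames) at one flow sample: radial distance from the FOE
     * divided by the radial component of the flow. A small radial flow
     * means the surface is far away, i.e. a large TTC.
     */
    public static double timeToCollision(Flow f, double[] foe) {
        double rx = f.x - foe[0], ry = f.y - foe[1];
        double r = Math.hypot(rx, ry);
        double radialFlow = (rx * f.u + ry * f.v) / Math.max(r, 1e-9);
        return radialFlow > 1e-9 ? r / radialFlow : Double.POSITIVE_INFINITY;
    }

    public static void main(String[] args) {
        // Synthetic flow field expanding from (160, 120) at 5% per frame,
        // as if the camera were translating straight toward a wall.
        double fx = 160, fy = 120, rate = 0.05;
        Flow[] flows = {
            new Flow(40, 30, (40 - fx) * rate, (30 - fy) * rate),
            new Flow(300, 40, (300 - fx) * rate, (40 - fy) * rate),
            new Flow(60, 200, (60 - fx) * rate, (200 - fy) * rate),
            new Flow(280, 210, (280 - fx) * rate, (210 - fy) * rate),
        };
        double[] foe = focusOfExpansion(flows);
        System.out.printf("FOE ~ (%.1f, %.1f)%n", foe[0], foe[1]);
        System.out.printf("TTC ~ %.1f frames%n", timeToCollision(flows[0], foe));
        // Expect TTC ~ 1/rate = 20 frames for a uniformly expanding field.
    }
}

For the synthetic field expanding at 5% per frame, the sketch recovers the FOE exactly and reports a TTC of about 20 frames; on real Horn–Schunck output the fit would be noisy and would benefit from outlier rejection before the robot maneuvers on it.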

Citation (APA)

Krishnan, M., Wu, M., Kang, Y. H., & Lee, S. (2015). Autonomous mapping and navigation through utilization of edge-based optical flow and time-to-collision. Lecture Notes in Electrical Engineering, 313, 149–157. https://doi.org/10.1007/978-3-319-06773-5_21
