Vision based reconstruction multi-clouds of scale invariant feature transform features for indoor navigation

Citations: 5 · Readers (Mendeley): 6

Abstract

Problem statement: Navigation for visually impaired people needs to exploit more approaches to solving its problems, especially in image-based navigation methods. Approach: This study introduces a new approach to an electronic cane for navigating the environment by forming multiple clouds of SIFT features for the scene objects in the environment, subject to a few considerations. Results: The system gives efficient localization within a weighted topological graph. Instead of building a metric (3D) model of the environment, it helps the blind person navigate more confidently. The work moves toward conceptualizing the environment on the basis of the human-compatible representation so formed; such a representation and the resulting conceptualization would be useful for making blind persons cognizant of their surroundings. Different scenes are identified to the blind person by clouds of two or three objects. These clouds group the stored objects into meaningful groups used for localizing a cane equipped with a single web camera as the external sensor. Conclusion: The approach is useful for dividing the environment into meaningful partitions and helps detect the sites and objects the blind person needs, very efficiently, within the map. © 2009 Science Publications.
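
The abstract outlines the pipeline only at a high level: per-object SIFT clouds are built offline, and each camera frame from the cane is localized by matching against the clouds attached to nodes of a weighted topological graph. The paper gives no code, so the following is only a minimal illustrative sketch of that idea using OpenCV's SIFT; the object image names, scene nodes, edge weights, Lowe-ratio threshold, and the min_matches cutoff are hypothetical placeholders, not values from the paper.

    # Illustrative sketch only (see note above): per-object SIFT "clouds" and
    # scene-level localization on a weighted topological graph.
    import cv2

    def extract_sift_cloud(image_path):
        # Build the SIFT descriptor set ("cloud") for one stored object image.
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            return None
        sift = cv2.SIFT_create()
        _, descriptors = sift.detectAndCompute(gray, None)
        return descriptors

    def match_score(frame_desc, cloud_desc, ratio=0.75):
        # Count Lowe-ratio matches between the camera frame and a stored cloud.
        if frame_desc is None or cloud_desc is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(frame_desc, cloud_desc, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < ratio * p[1].distance)

    # Hypothetical map: each scene node is identified by a cloud of two or
    # three objects; edges carry traversal weights (e.g. walking distance).
    scene_objects = {
        "corridor": ["door.jpg", "extinguisher.jpg"],
        "office":   ["desk.jpg", "monitor.jpg", "chair.jpg"],
    }
    graph_edges = {("corridor", "office"): 4.0}
    scene_clouds = {s: [extract_sift_cloud(p) for p in imgs]
                    for s, imgs in scene_objects.items()}

    def localize(frame_gray, min_matches=15):
        # Pick the scene whose object clouds best match the current frame.
        sift = cv2.SIFT_create()
        _, frame_desc = sift.detectAndCompute(frame_gray, None)
        best_scene, best_score = None, 0
        for scene, clouds in scene_clouds.items():
            score = sum(match_score(frame_desc, c) for c in clouds)
            if score > best_score:
                best_scene, best_score = scene, score
        return best_scene if best_score >= min_matches else None

Matching each object's cloud separately, rather than pooling all descriptors of a scene, keeps the two-or-three-object grouping of each node explicit; the edge weights in graph_edges would then feed route planning over the topological graph rather than any metric (3D) reconstruction.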

Citation (APA)

Ali, A. M., & Nordin, M. J. (2009). Vision based reconstruction multi-clouds of scale invariant feature transform features for indoor navigation. Journal of Computer Science, 5(12), 948–955. https://doi.org/10.3844/jcssp.2009.948.955
