IntelliNavi: Navigation for Blind Based on Kinect and Machine Learning

Abstract

This paper presents a wearable navigation assistive system for blind and visually impaired people, built with off-the-shelf technology. The Microsoft Kinect's on-board depth sensor is used to extract Red, Green, Blue and Depth (RGB-D) data from the indoor environment. Speeded-Up Robust Features (SURF) and a Bag-of-Visual-Words (BOVW) model are used to extract features, reducing generic indoor object detection to a machine learning problem. A Support Vector Machine classifier categorizes scene objects and obstacles, and critical real-time information is issued to the user through an external aid (earphone) for safe navigation. A user study with blindfolded participants was performed to measure the efficiency of the overall framework.
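The pipeline the abstract describes (SURF features, a BOVW vocabulary, an SVM classifier, and an audio cue) can be sketched with standard libraries. The snippet below is a minimal illustration, assuming OpenCV with the contrib modules (where SURF lives) and scikit-learn for the SVM; the vocabulary size, the class labels, and the helpers load_labelled_rgb_frames, kinect_depth_at_center, and speak are hypothetical placeholders, not the authors' actual configuration.

```python
# Minimal sketch: SURF + Bag-of-Visual-Words + SVM, assuming
# opencv-contrib-python (SURF is a non-free contrib algorithm and may be
# missing from builds compiled without it) and scikit-learn.
import numpy as np
import cv2
from sklearn.svm import SVC

VOCAB_SIZE = 100  # assumed visual-vocabulary size, not from the paper

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def build_vocabulary(train_images):
    """Cluster SURF descriptors from the training set into visual words."""
    trainer = cv2.BOWKMeansTrainer(VOCAB_SIZE)
    for img in train_images:
        _, desc = surf.detectAndCompute(img, None)
        if desc is not None:
            trainer.add(desc)
    return trainer.cluster()  # k-means centroids form the vocabulary

def make_bow_extractor(vocabulary):
    """Build an extractor that maps an image to a word-frequency histogram."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    extractor = cv2.BOWImgDescriptorExtractor(surf, matcher)
    extractor.setVocabulary(vocabulary)
    return extractor

def bow_histogram(extractor, img):
    """VOCAB_SIZE-dimensional BOVW descriptor for one frame."""
    hist = extractor.compute(img, surf.detect(img, None))
    return hist[0] if hist is not None else np.zeros(VOCAB_SIZE, np.float32)

# --- Training (labels such as "door" / "staircase" are assumptions) ---
# train_images, train_labels = load_labelled_rgb_frames()  # hypothetical loader
# vocab = build_vocabulary(train_images)
# bow = make_bow_extractor(vocab)
# X = np.array([bow_histogram(bow, img) for img in train_images])
# clf = SVC(kernel="linear").fit(X, train_labels)

# --- Inference on a live Kinect RGB frame, with a depth-based alert ---
# label = clf.predict(bow_histogram(bow, frame).reshape(1, -1))[0]
# if kinect_depth_at_center(frame) < 1.5:   # hypothetical depth threshold (m)
#     speak(f"{label} ahead")               # audio cue through the earphone
```

The reduction to "a machine learning problem" is visible in the sketch: once every frame is a fixed-length BOVW histogram, obstacle recognition is ordinary multi-class classification, here handled by a linear-kernel SVM.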

Citation (APA)

Bhowmick, A., Prakash, S., Bhagat, R., Prasad, V., & Hazarika, S. M. (2014). IntelliNavi: Navigation for blind based on Kinect and machine learning. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8875, 172–183. https://doi.org/10.1007/978-3-319-13365-2_16
