This paper presents a wearable navigation assistive system for blind and visually impaired users built from off-the-shelf technology. The Microsoft Kinect's onboard depth sensor is used to capture Red, Green, Blue and Depth (RGB-D) data of the indoor environment. Speeded-Up Robust Features (SURF) and the Bag-of-Visual-Words (BoVW) model are used to extract features and reduce generic indoor object detection to a machine learning problem. A Support Vector Machine classifier labels scene objects and obstacles, and critical real-time information is relayed to the user through an earphone for safe navigation. We performed a user study with blindfolded users to measure the efficiency of the overall framework.
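The described pipeline (local features → visual vocabulary → BoVW histogram → SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes synthetic 64-D descriptors for real SURF output (SURF is patent-encumbered and absent from stock OpenCV builds), and the class labels, descriptor counts, and vocabulary size are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for SURF descriptors (64-D), one set per "image".
# Two hypothetical object classes drawn from shifted distributions.
def fake_descriptors(label, n=50, dim=64):
    return rng.normal(loc=label * 2.0, scale=1.0, size=(n, dim))

train_imgs = [(fake_descriptors(y), y) for y in [0, 1] * 20]

# Step 1: build the visual vocabulary by clustering all local descriptors.
all_desc = np.vstack([d for d, _ in train_imgs])
k = 32  # vocabulary size (number of visual words)
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

# Step 2: encode each image as a normalized histogram of visual words.
def bovw_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d, _ in train_imgs])
y = np.array([lbl for _, lbl in train_imgs])

# Step 3: train an SVM on the BoVW vectors.
clf = SVC(kernel="rbf").fit(X, y)

# Classify the descriptors of a new, unseen "image".
pred = clf.predict([bovw_histogram(fake_descriptors(1))])[0]
print(pred)
```

In the real system, `fake_descriptors` would be replaced by SURF keypoint extraction on each Kinect RGB frame, and the predicted label would drive the audio feedback issued to the user.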
Bhowmick, A., Prakash, S., Bhagat, R., Prasad, V., & Hazarika, S. M. (2014). IntelliNavi: Navigation for blind based on Kinect and machine learning. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8875, 172–183. https://doi.org/10.1007/978-3-319-13365-2_16