A depth-data-based obstacle detection and avoidance application is presented that assists visually impaired (VI) users in navigating independently in previously unmapped indoor environments. The application is being developed for the recently introduced Google Project Tango Tablet Development Kit, which is equipped with a powerful processor (an NVIDIA Tegra K1 with 192 CUDA cores) as well as various sensors that allow it to track its motion and orientation in 3D space in real time. Depth data for the area in front of the user, obtained using the tablet's built-in infrared-based depth sensor, is analyzed to detect obstacles, and audio-based navigation instructions are provided accordingly. A visual display option is also offered for users with low vision. The aim is to develop a real-time, affordable, aesthetically acceptable, stand-alone mobile assistive application on a cutting-edge device, adopting a user-centered approach, that allows VI users to micro-navigate autonomously in possibly unfamiliar indoor surroundings.
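To illustrate the general idea of turning a depth map into navigation instructions, the following is a minimal sketch, not the authors' actual algorithm: it assumes the depth data arrives as a metric depth image (a 2D array of distances in metres, with 0 marking invalid readings), splits the central band into left, centre, and right regions, and picks a spoken instruction based on which regions contain an obstacle closer than a hypothetical threshold.

```python
import numpy as np

def detect_obstacle(depth_m: np.ndarray, threshold_m: float = 1.0) -> str:
    """Return a simple navigation instruction from a depth map (metres).

    This is an illustrative sketch only; the threshold, the region
    layout, and the instruction set are assumptions, not the paper's
    method. Zero values are treated as invalid (no depth reading).
    """
    rows, _ = depth_m.shape
    # Look at the vertically central band, roughly at obstacle height.
    band = depth_m[rows // 3 : 2 * rows // 3, :]
    # Split the band into left / centre / right thirds.
    thirds = np.array_split(band, 3, axis=1)
    blocked = []
    for region in thirds:
        valid = region[region > 0]          # discard invalid pixels
        blocked.append(valid.size > 0 and float(valid.min()) < threshold_m)
    left, centre, right = blocked
    if not centre:
        return "go straight"
    if not left:
        return "move left"
    if not right:
        return "move right"
    return "stop"
```

In a real application the resulting string would be passed to a text-to-speech engine to produce the audio instructions described above.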
Citation
Jafri, R., & Khan, M. M. (2016). Obstacle detection and avoidance for the visually impaired in indoor environments using Google's Project Tango device. In Lecture Notes in Computer Science (Vol. 9759, pp. 179–185). Springer. https://doi.org/10.1007/978-3-319-41267-2_24