Wearable auxiliary devices for visually impaired people are a highly attractive research topic. Although many proposed wearable navigation devices can assist visually impaired people with obstacle avoidance and navigation, these devices cannot feed back detailed information about the obstacles or help visually impaired users understand the environment. In this paper, we propose a wearable navigation device for the visually impaired that integrates semantic visual SLAM (Simultaneous Localization And Mapping) with a newly launched, powerful mobile computing platform. The system uses an RGB-D (color-plus-depth) camera based on structured light as the sensor and the mobile computing platform as the control center. We also focus on the technology that combines SLAM with the extraction of semantic information from the environment, which ensures that the computing platform understands the surrounding environment in real time and can feed it back to the visually impaired user in the form of a voice broadcast. Finally, we tested the performance of the proposed semantic visual SLAM system on this device. The results indicate that the system can run in real time on a wearable navigation device with sufficient accuracy.
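The pipeline the abstract describes (RGB-D sensing, semantic interpretation, voice feedback) can be illustrated in a few lines. The Python below is a minimal conceptual sketch, not the authors' implementation: the frame source, obstacle detector, speech function, and alert radius are all hypothetical placeholders standing in for the real RGB-D capture, semantic visual SLAM, and voice-broadcast components.

from dataclasses import dataclass

@dataclass
class SemanticObservation:
    label: str         # semantic class of a detected object, e.g. "chair"
    distance_m: float  # depth-derived distance from the camera, in meters

def detect_semantic_obstacles(frame):
    # Placeholder: the real device would run semantic visual SLAM and
    # segmentation on the RGB-D frame; here we return canned results.
    return [SemanticObservation("chair", 1.2), SemanticObservation("doorway", 3.5)]

def speak(message):
    # Placeholder for the device's voice-broadcast channel.
    print("[TTS]", message)

def navigation_loop(frames, alert_radius_m=2.0):
    # For each incoming RGB-D frame, announce nearby semantic obstacles,
    # mirroring the real-time feedback loop the abstract describes.
    for frame in frames:
        for obs in detect_semantic_obstacles(frame):
            if obs.distance_m <= alert_radius_m:
                speak(f"{obs.label} ahead, about {obs.distance_m:.1f} meters")

navigation_loop(frames=[None])  # one dummy frame for demonstration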
Chen, Z., Liu, X., Kojima, M., Huang, Q., & Arai, T. (2021). A wearable navigation device for visually impaired people based on the real-time semantic visual SLAM system. Sensors, 21(4), 1–14. https://doi.org/10.3390/s21041536