Blindness affects millions of people worldwide, leading to difficulties in daily travel and a loss of independence due to a lack of spatial information. This article proposes a new navigation aid to help people with severe blindness reach their destination. Blind people are guided by a short 3D spatialised sound that indicates the target point to follow. This sound is combined with other sonified information on potential obstacles in the vicinity. The proposed system relies on inertial sensors, GPS data, and cartographic knowledge of pedestrian paths to define the trajectory. In addition, visual cues are used to refine the trajectory with ground-surface and obstacle information, using a camera to provide 3D spatial information. The proposed method is based on a deep learning approach. The different neural networks used in this approach are evaluated on datasets that gather navigation recordings from a pedestrian's point of view. The method achieves low latency and real-time processing without relying on remote connections, instead using a low-power embedded GPU and a multithreaded approach for video processing, sound generation, and acquisition. This system could significantly improve the quality of life and autonomy of blind people, allowing them to navigate their environment reliably and efficiently.
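The abstract does not detail how the guidance sound is steered, so the following is only a minimal illustrative sketch, not the authors' implementation: it computes the great-circle bearing from the user's GPS position to the next waypoint and maps the angle between that bearing and the user's heading to constant-power left/right stereo gains. The function names (`bearing_deg`, `stereo_gains`) and the simple panning model are assumptions for illustration; the actual system uses full 3D sound spatialisation.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def stereo_gains(target_bearing, heading):
    """Map the target's angle relative to the user's heading to (left, right)
    amplitude gains using constant-power panning. A target to the right of the
    heading produces a louder right channel, and vice versa."""
    rel = (target_bearing - heading + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    rel = max(-90.0, min(90.0, rel))  # clamp: targets behind still pan hard left/right
    theta = math.radians(rel + 90.0) / 2.0  # 0 (full left) .. pi/2 (full right)
    return math.cos(theta), math.sin(theta)

# Waypoint due east of the user, user facing north -> sound pans hard right.
b = bearing_deg(47.3220, 5.0415, 47.3220, 5.0500)  # hypothetical coordinates near Dijon
left, right = stereo_gains(b, heading=0.0)
print(round(b), round(left, 3), round(right, 3))
```

In a real system the heading would come from the inertial sensors and the gains would be recomputed at audio-frame rate; constant-power panning (squared gains summing to 1) keeps perceived loudness stable as the target swings across the horizon.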
Scalvini, F., Bordeau, C., Ambard, M., Migniot, C., & Dubois, J. (2024). Outdoor Navigation Assistive System Based on Robust and Real-Time Visual–Auditory Substitution Approach. Sensors, 24(1). https://doi.org/10.3390/s24010166