With the development of computer vision and mobile computing, assistive navigation for people with visual impairment has attracted growing attention from the research community. Two key challenges of assistive navigation, 'Where am I?' and 'What are the surroundings?', remain to be addressed by exploiting visual information. In this paper, we leverage a prevailing compact network as the backbone to build a unified network with two branches that perform scene description and scene recognition separately. Building on the unified network, the proposed pipeline performs scene recognition and visual localization simultaneously in the assistive-navigation scenario. The visual localization pipeline involves image retrieval and sequence matching. In the experiments, different configurations of the proposed pipeline are tested on public datasets to search for the optimal parameters. Moreover, on real-world datasets captured by a wearable assistive device, the proposed assistive navigation pipeline is shown to achieve satisfactory performance. On the challenging dataset, the top-5 precision of scene recognition exceeds 80%, and the visual localization precision is over 60% at a recall of 60%. The related code and datasets are available at https://github.com/chengricky/ScenePlaceRecognition.
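
To make the two ideas in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation; see the linked repository for the actual code) of a shared compact backbone with two heads, one for scene recognition and one producing a global descriptor for image retrieval, followed by a simple diagonal sequence-matching step over retrieval similarities. All class names, dimensions, and the matching heuristic are assumptions made for illustration.

# Illustrative sketch only: a two-branch network on a shared backbone, plus a
# simple sequence-matching step over retrieval similarities. Names and shapes
# are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchNet(nn.Module):
    """Shared backbone with a scene-classification head and a global-descriptor head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_scene_classes: int, desc_dim: int = 256):
        super().__init__()
        self.backbone = backbone                      # assumed compact CNN trunk returning pooled features
        self.scene_head = nn.Linear(feat_dim, num_scene_classes)
        self.desc_head = nn.Linear(feat_dim, desc_dim)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)                       # (B, feat_dim) pooled features
        scene_logits = self.scene_head(feat)          # branch 1: scene recognition
        descriptor = F.normalize(self.desc_head(feat), dim=1)  # branch 2: retrieval descriptor
        return scene_logits, descriptor


def sequence_match(query_desc: torch.Tensor, db_desc: torch.Tensor, seq_len: int = 5) -> int:
    """Pick the database frame whose local sequence best matches the query sequence.

    query_desc: (seq_len, D) descriptors of the last seq_len query frames.
    db_desc:    (N, D) descriptors of the map images, assumed ordered along the route.
    Returns the index of the best-matching database frame for the latest query.
    """
    sim = query_desc @ db_desc.t()                    # (seq_len, N) cosine similarities
    n = db_desc.shape[0]
    best_idx, best_score = -1, float("-inf")
    # Slide a diagonal window: query frame i is compared to database frame (start + i).
    for start in range(n - seq_len + 1):
        score = sum(sim[i, start + i].item() for i in range(seq_len))
        if score > best_score:
            best_score, best_idx = score, start + seq_len - 1
    return best_idx

In this sketch, the classification branch answers "What are the surroundings?" directly from the logits, while the descriptor branch feeds retrieval and sequence matching to answer "Where am I?"; the diagonal scoring is a common simplification of sequence-based place recognition and stands in for whatever matching scheme the paper actually uses.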
Cheng, R., Wang, K., Bai, J., & Xu, Z. (2020). Unifying Visual Localization and Scene Recognition for People with Visual Impairment. IEEE Access, 8, 64284–64296. https://doi.org/10.1109/ACCESS.2020.2984718