Smartphone-based indoor navigation services are in high demand, yet their adoption has been relatively slow: fine-grained, up-to-date indoor maps are scarce, and infrastructure-based indoor localization solutions incur potentially high deployment and maintenance costs. This work proposes ViNav, a scalable and cost-efficient system that implements indoor mapping, localization, and navigation based on visual and inertial sensor data collected from smartphones. ViNav applies structure-from-motion (SfM) techniques to reconstruct 3D models of indoor environments from crowdsourced images, locates points of interest (POIs) in the 3D models, and compiles navigation meshes for path finding. ViNav implements image-based localization that identifies a user's position and facing direction, and leverages this capability to calibrate dead-reckoning-based user trajectories and the sensor fingerprints collected along them. The calibrated information is used to build more informative and accurate indoor maps and to lower the response delay of localization requests. According to our experimental results in a university building and a supermarket, the system works as intended and its indoor localization achieves competitive performance compared with traditional approaches: in the supermarket, ViNav locates users within 2 seconds, with a position error below 1 meter and a facing-direction error below 6 degrees.
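To illustrate the image-based localization step described above, here is a minimal sketch, not the authors' implementation: it matches a query photo's local features against the 3D points of an SfM model and recovers the camera pose with PnP and RANSAC via OpenCV. The `localize` function, the model layout (one SIFT descriptor per 3D point), the ratio-test threshold, and the assumption that the model's x-y plane is the floor are all illustrative choices, not details from the paper.

```python
# Sketch of image-based localization against an SfM model (assumed data layout).
import numpy as np
import cv2

def localize(query_img_gray, model_points_3d, model_descriptors, K):
    """Estimate a user's position and facing direction from one query image.

    model_points_3d   -- (N, 3) float32 SfM point coordinates
    model_descriptors -- (N, 128) float32 SIFT descriptors, one per 3D point
    K                 -- (3, 3) smartphone camera intrinsic matrix
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(query_img_gray, None)
    if descriptors is None:
        return None  # no local features found in the query image

    # Build 2D-3D correspondences by descriptor matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, model_descriptors, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 6:
        return None  # too few correspondences for a reliable pose

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in good])
    object_pts = np.float32([model_points_3d[m.trainIdx] for m in good])

    # Robust camera pose from 2D-3D matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts, K, None, reprojectionError=4.0)
    if not ok:
        return None

    R, _ = cv2.Rodrigues(rvec)
    position = (-R.T @ tvec).ravel()        # camera centre in model coordinates
    view_dir = R.T @ np.float32([0, 0, 1])  # optical axis in model coordinates
    # Facing direction, assuming the model's x-y plane is the floor plane.
    heading_deg = np.degrees(np.arctan2(view_dir[1], view_dir[0]))
    return position, heading_deg
```

The recovered pose gives both pieces of information the abstract mentions, position and facing direction, which is what makes it usable as a calibration anchor for dead-reckoning trajectories.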
Dong, J., Noreikis, M., Xiao, Y., & Ylä-Jääski, A. (2019). ViNav: A Vision-Based Indoor Navigation System for Smartphones. IEEE Transactions on Mobile Computing, 18(6), 1461–1475. https://doi.org/10.1109/TMC.2018.2857772