Precise localization and pose estimation in indoor environments are required by a wide range of applications, including robotics, augmented reality, and navigation and positioning services. Such tasks can be addressed through visual localization against a pre-built 3D model. The growth of the search space in large scenes can be overcome by first retrieving candidate images and then estimating the pose. However, the majority of current deep learning-based image retrieval methods require labeled data, which increases annotation costs and complicates data acquisition. In this paper, we propose an unsupervised hierarchical indoor localization framework that integrates an unsupervised variational autoencoder (VAE) with a visual Structure-from-Motion (SfM) approach to extract global and local features. During localization, global features are used for image retrieval at the level of the scene map to obtain candidate images, and local features are subsequently used to estimate the pose from 2D-3D matches between the query and candidate images. Only RGB images are used as input to the proposed localization system, which is both convenient and challenging. Experimental results show that the proposed method localizes images within 0.16 m and 4° on the 7-Scenes datasets, and 32.8% of images within 5 m and 20° on the Baidu dataset. Furthermore, the proposed method achieves higher precision than existing advanced methods.
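The two-stage pipeline summarized above (an unsupervised global descriptor for retrieval, followed by local 2D-3D matching for pose estimation) can be sketched in a few dozen lines. The following Python sketch is illustrative only: the encoder architecture, latent dimension, cosine-similarity retrieval, and the use of OpenCV's solvePnPRansac are assumptions made for the example rather than the paper's exact implementation, and the 2D-3D matches are taken as given.

```python
# Illustrative sketch of a hierarchical localization pipeline:
# (1) a small convolutional VAE encoder yields an unsupervised global
#     descriptor, (2) nearest-neighbour search over database descriptors
#     yields candidate images, (3) PnP + RANSAC on 2D-3D matches against
#     the SfM model recovers the 6-DoF pose.
# Network sizes and the overall data layout are assumptions for this
# example, not the paper's exact design.
import numpy as np
import torch
import torch.nn as nn
import cv2


class ConvVAEEncoder(nn.Module):
    """Encoder half of a VAE; the latent mean serves as the global feature."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128, latent_dim)      # latent mean
        self.fc_logvar = nn.Linear(128, latent_dim)  # latent log-variance

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.fc_mu(h), self.fc_logvar(h)


@torch.no_grad()
def global_descriptor(encoder: ConvVAEEncoder, image: torch.Tensor) -> np.ndarray:
    """L2-normalized latent mean of one CHW image, used as the descriptor."""
    mu, _ = encoder(image.unsqueeze(0))
    d = mu.squeeze(0).cpu().numpy()
    return d / (np.linalg.norm(d) + 1e-12)


def retrieve_candidates(query_desc: np.ndarray, db_descs: np.ndarray, k: int = 5):
    """Rank database images by cosine similarity (descriptors pre-normalized)."""
    sims = db_descs @ query_desc          # (N,) similarity scores
    return np.argsort(-sims)[:k]          # indices of top-k candidate images


def estimate_pose(pts2d: np.ndarray, pts3d: np.ndarray, K: np.ndarray):
    """PnP + RANSAC on 2D-3D matches between the query image and the SfM model."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    return (rvec, tvec) if ok else None
```

Using the latent mean as the retrieval descriptor is a common choice for VAE-based retrieval, since it is the deterministic summary of the approximate posterior and needs no labels to train.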
Jiang, J., Zou, Y., Chen, L., & Fang, Y. (2021). A visual and VAE based hierarchical indoor localization method. Sensors, 21(10), 3406. https://doi.org/10.3390/s21103406