Pedestrian localization within large-scale multi-building, multi-floor indoor environments remains a challenging task. Fingerprinting-based approaches are particularly suited for such large-scale deployments because they require little additional hardware installation. Recently, the fingerprinting problem has been addressed with deep learning, but existing models are mostly task-specific, providing either floor classification or position estimation within a small area. A common strategy for supporting localization in large-scale environments is to apply hierarchical models sequentially, which has several drawbacks, including a lack of scalability and increased deployment complexity on smartphones. We propose a unifying approach based on training a single neural network that classifies the building/floor and predicts the position in a single forward pass. Our model classifies a grid cell and performs within-cell regression, which avoids the performance degradation of applying regression over large areas. To reduce the error caused by misclassified grid cells, we propose a novel technique called multi-cell encoding learning (multi-CEL), in which the model simultaneously learns several redundant position representations within an overlapping grid-cell encoding. On three public WLAN fingerprinting datasets, we demonstrate that multi-CEL surpasses existing state-of-the-art multi-task learning neural networks and even outperforms regression neural networks explicitly trained for 2D positioning by up to 17%.
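The abstract describes the architecture only at a high level. The following is a minimal sketch of the single-network, multi-task idea (building/floor classification, grid-cell classification, and within-cell regression in one forward pass), assuming a PyTorch implementation; the class name, layer sizes, loss weights, and the normalized-offset encoding are illustrative assumptions and are not taken from the paper.

```python
# Minimal multi-task sketch (assumptions: PyTorch; layer sizes, grid layout,
# and loss weighting are illustrative, not the architecture from the paper).
import torch
import torch.nn as nn


class MultiTaskPositioningNet(nn.Module):
    """One forward pass returns floor logits, grid-cell logits, and a
    within-cell 2D offset normalized to [0, 1] inside the predicted cell."""

    def __init__(self, num_aps: int, num_floors: int, num_cells: int, hidden: int = 256):
        super().__init__()
        # Shared backbone over the WLAN RSS fingerprint vector.
        self.backbone = nn.Sequential(
            nn.Linear(num_aps, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.floor_head = nn.Linear(hidden, num_floors)   # building/floor classification
        self.cell_head = nn.Linear(hidden, num_cells)     # grid-cell classification
        self.offset_head = nn.Sequential(                 # within-cell regression
            nn.Linear(hidden, 2), nn.Sigmoid()
        )

    def forward(self, rss: torch.Tensor):
        h = self.backbone(rss)
        return self.floor_head(h), self.cell_head(h), self.offset_head(h)


def multitask_loss(floor_logits, cell_logits, offset, floor_y, cell_y, offset_y,
                   w_floor: float = 1.0, w_cell: float = 1.0, w_reg: float = 1.0):
    # Joint loss: cross-entropy for the two classification tasks plus MSE for
    # the within-cell offset; the task weights are placeholders.
    ce = nn.functional.cross_entropy
    return (w_floor * ce(floor_logits, floor_y)
            + w_cell * ce(cell_logits, cell_y)
            + w_reg * nn.functional.mse_loss(offset, offset_y))
```

At inference, the predicted cell's origin plus the cell size times the regressed offset yields an absolute 2D position. Under the multi-CEL idea as described in the abstract, several shifted (overlapping) grid encodings would each contribute a cell/offset pair, and combining their decoded positions reduces the impact of any single misclassified cell; the exact encoding and decoding scheme is detailed in the paper, not in this sketch.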
Laska, M., & Blankenbach, J. (2022). Multi-Task Neural Network for Position Estimation in Large-Scale Indoor Environments. IEEE Access, 10, 26024–26032. https://doi.org/10.1109/ACCESS.2022.3156579