Fusion of inertial and visual information for indoor localisation

Abstract

Indoor localisation has attracted considerable attention because of its importance for location-based services. A fusion algorithm (named YELM-DS) based on the extreme learning machine (ELM) and Dempster–Shafer (D–S) evidence theory is proposed. In the offline phase, the ELM learns, at high speed, a model mapping input data composed of inertial and visual information to target output positions. In the online phase, the final localisation result for each frame is decided by the trust degrees obtained from D–S evidence theory. Angle judgements are also introduced to reduce the large localisation errors that occur during turns. Compared with existing vision-only methods, the proposed method both runs in real time and achieves good localisation accuracy, even in challenging scenarios.
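
As a rough illustration of the two ingredients named in the abstract, the sketch below shows a closed-form ELM regression step and Dempster's rule of combination in Python. The hidden-layer size, sigmoid activation, hypothesis frame, and mass values are illustrative assumptions; the abstract does not specify the paper's actual features, parameters, or mass construction.

```python
import numpy as np

# Closed-form training of a single-hidden-layer ELM regressor.
# n_hidden and the sigmoid activation are illustrative choices,
# not values taken from the paper.
def elm_train(X, Y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (kept fixed)
    b = rng.standard_normal(n_hidden)                # random hidden biases (kept fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                                  # predicted positions

# Dempster's rule of combination for two mass functions whose
# hypotheses are frozensets over a common frame of discernment.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b_, mb in m2.items():
            inter = a & b_
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                  # mass falling on the empty set
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical usage: decide per frame whether to trust the inertial (I)
# or visual (V) position estimate; theta encodes residual ignorance.
I, V = frozenset({"I"}), frozenset({"V"})
theta = I | V
m_inertial = {I: 0.6, theta: 0.4}                    # illustrative masses only
m_visual = {V: 0.5, theta: 0.5}
trust = dempster_combine(m_inertial, m_visual)
# The frame's final result would come from whichever source carries
# the higher combined belief.
```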

Citation (APA)
Xu, Y., Yu, H., & Zhang, J. (2018). Fusion of inertial and visual information for indoor localisation. Electronics Letters, 54(13), 850–851. https://doi.org/10.1049/el.2018.0366
