Use of inertial sensors to support video tracking

Abstract

One of the biggest obstacles to building effective augmented reality (AR) systems is the lack of accurate sensors that report the location of the user in an environment during arbitrarily long periods of movement. In this paper, we present an effective hybrid approach that integrates inertial and vision-based technologies. This work is motivated by the need to explicitly take into account the relatively poor accuracy of inertial sensors and thus to define an efficient strategy for the collaborative process between the vision-based system and the sensor. The contributions of this paper are threefold: (i) our collaborative strategy fully integrates the sensitivity error of the sensor: the sensitivity is studied in practice and is propagated into the collaborative process, especially in the matching stage; (ii) we propose an original online synchronization process between the vision-based system and the sensor, which allows us to use the sensor only when needed; (iii) an effective AR system using this hybrid tracking is demonstrated through an e-commerce application in unprepared environments. Copyright © 2007 John Wiley & Sons, Ltd.
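The core idea of contribution (i) can be sketched as follows: an inertial orientation estimate predicts where a tracked feature should reappear in the image, and the sensor's sensitivity error is propagated into the size of the search window used in the matching stage. This is a minimal illustrative sketch, not the paper's actual implementation; all function names, the small-angle projection model, and the numeric values are assumptions for illustration.

```python
import math

def predicted_search_window(prev_px, rot_deg, focal_px, sigma_deg):
    """Predict a feature's new image position from an inertial rotation
    estimate, and size the matching search window from the sensor's
    sensitivity error (illustrative model, not the paper's formulation)."""
    # Small-angle model: a camera rotation of rot_deg about the vertical
    # axis shifts image points horizontally by ~focal * tan(rotation).
    shift = focal_px * math.tan(math.radians(rot_deg))
    predicted = (prev_px[0] + shift, prev_px[1])
    # Propagate the sensor's sensitivity error into the search radius:
    # an orientation uncertainty of sigma_deg maps to a pixel uncertainty.
    radius = focal_px * math.tan(math.radians(sigma_deg))
    return predicted, radius

def match_in_window(predicted, radius, candidates):
    """Restrict matching to candidates inside the sensor-bounded window,
    then pick the nearest one (None if the window is empty)."""
    in_window = [c for c in candidates if math.dist(c, predicted) <= radius]
    return min(in_window, key=lambda c: math.dist(c, predicted), default=None)

# Example: a 2 deg rotation with a 0.5 deg sensitivity error, 800 px focal length.
predicted, radius = predicted_search_window((320.0, 240.0), 2.0, 800.0, 0.5)
match = match_in_window(predicted, radius, [(348.0, 241.0), (300.0, 240.0)])
```

A larger sensitivity error widens the window (more candidates survive, matching is slower but safer); a well-characterized sensor tightens it, which is why studying the sensitivity in practice pays off in the matching stage.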

Citation (APA)

Aron, M., Simon, G., & Berger, M. O. (2007). Use of inertial sensors to support video tracking. In Computer Animation and Virtual Worlds (Vol. 18, pp. 57–68). https://doi.org/10.1002/cav.161
