Bootstrapping computer vision and sensor fusion for absolute and relative vehicle positioning

Abstract

With the migration toward automated driving for various classes of vehicles, affordable self-positioning with at least centimetre-level accuracy is a goal to be achieved. Commonly used techniques such as GPS are either not accurate enough in their basic variant, or accurate but too expensive. In addition, sufficient GPS coverage is in several cases not guaranteed. In this paper we propose positioning of a vehicle based on the fusion of several sensor inputs. We consider inputs from improved GPS (with internet-based corrections), inertial sensors and vehicle sensors, fused with computer-vision-based positioning. For vision-based positioning, cameras are used for feature-based visual odometry to perform relative positioning, and beacon-based methods are used for absolute positioning. Visual features are brought into a dynamic map, which allows information to be shared among vehicles and allows us to deal with less robust features. This paper does not present final results; rather, it is intended to share ideas that are currently being investigated and implemented.
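The fusion of relative updates (visual odometry) with absolute fixes (corrected GPS or beacons) described above can be illustrated with a minimal one-dimensional Kalman filter. This is a hedged sketch of the general technique only; the state model, noise values, and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal 1-D Kalman-style fusion sketch (illustrative, not the paper's method):
# a relative displacement (e.g. visual odometry) propagates the position and
# inflates its uncertainty; an absolute fix (e.g. corrected GPS or a beacon)
# then corrects both. All numeric values are assumptions.

def predict(x, p, dx, q):
    """Propagate position x with relative displacement dx; variance p grows by q."""
    return x + dx, p + q

def update(x, p, z, r):
    """Correct position x with absolute measurement z of variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

# Start from an uncertain absolute position estimate.
x, p = 0.0, 4.0

# Relative input: odometry reports ~1 m of forward motion.
x, p = predict(x, p, dx=1.0, q=0.1)

# Absolute input: a GPS-like fix at 1.2 m pulls the estimate toward it.
x, p = update(x, p, z=1.2, r=1.0)

print(x, p)
```

The estimate lands between the odometry prediction and the absolute fix, weighted by their variances, and the posterior variance shrinks below both, which is the core behaviour the proposed fusion relies on.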

CITATION STYLE

APA

Janssen, K., Rademakers, E., Boulkroune, B., El Ghouti, N., & Kleihorst, R. (2015). Bootstrapping computer vision and sensor fusion for absolute and relative vehicle positioning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9386, pp. 241–248). Springer Verlag. https://doi.org/10.1007/978-3-319-25903-1_21
