Vehicle localization using 3D building models and point cloud matching

Abstract

Detecting buildings in the surroundings of an urban vehicle and matching them to building models available on map services is an emerging trend in robotics localization for urban vehicles. In this paper, we present a novel technique, which improves on a previous work, by detecting building façades and their positions and finding the correspondences with their 3D models available in OpenStreetMap (OSM). The proposed technique uses segmented point clouds produced from stereo images processed by a convolutional neural network. The point clouds of the façades are then matched against a reference point cloud, produced by extruding the buildings’ outlines available on OSM. To produce a lane-level localization of the vehicle, the resulting information is then fed into our probabilistic framework, called Road Layout Estimation (RLE). We demonstrate the effectiveness of this proposal by testing it on sequences from the well-known KITTI dataset and comparing the results against a basic RLE version without the proposed pipeline.
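
As an illustration of the façade-to-map matching step, the following Python sketch (not the authors' implementation) extrudes a hypothetical OSM building footprint into a façade point cloud and aligns a simulated detected façade cloud to it with point-to-point ICP via Open3D. The footprint coordinates, the 12 m building height, the sampling step, and the ICP correspondence threshold are all illustrative assumptions.

    import numpy as np
    import open3d as o3d

    def extrude_footprint(footprint_xy, height, step=0.25):
        """Sample points on the vertical walls obtained by extruding a 2D building outline."""
        points = []
        for (x0, y0), (x1, y1) in zip(footprint_xy, np.roll(footprint_xy, -1, axis=0)):
            edge_len = np.hypot(x1 - x0, y1 - y0)
            for t in np.arange(0.0, 1.0, step / max(edge_len, step)):
                x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
                for z in np.arange(0.0, height, step):
                    points.append((x, y, z))
        return np.asarray(points)

    # Hypothetical rectangular footprint (metres, local map frame) and an assumed 12 m height.
    footprint = np.array([(0.0, 0.0), (20.0, 0.0), (20.0, 10.0), (0.0, 10.0)])
    reference = o3d.geometry.PointCloud()
    reference.points = o3d.utility.Vector3dVector(extrude_footprint(footprint, 12.0))

    # Stand-in for the segmented facade cloud: the same walls with a simulated pose error.
    detected = o3d.geometry.PointCloud()
    detected.points = o3d.utility.Vector3dVector(
        extrude_footprint(footprint, 12.0) + np.array([1.5, -0.8, 0.0]))

    # Point-to-point ICP recovers the rigid correction between the detection and the map model.
    result = o3d.pipelines.registration.registration_icp(detected, reference, 2.0)
    print(result.transformation)

In the pipeline described in the abstract, the detected cloud would instead come from the CNN-segmented stereo reconstruction, and the resulting alignment would be passed on to the RLE framework.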

Cite

CITATION STYLE: APA

Ballardini, A. L., Fontana, S., Cattaneo, D., Matteucci, M., & Sorrenti, D. G. (2021). Vehicle localization using 3D building models and point cloud matching. Sensors, 21(16). https://doi.org/10.3390/s21165356
