CenterLoc3D: monocular 3D vehicle localization network for roadside surveillance cameras

Abstract

Monocular 3D vehicle localization is an important task for vehicle behaviour analysis, traffic flow parameter estimation and autonomous driving in Intelligent Transportation Systems (ITS) and Cooperative Vehicle Infrastructure Systems (CVIS), and it is usually achieved through monocular 3D vehicle detection. However, monocular cameras cannot obtain depth information directly owing to their imaging mechanism, which makes monocular 3D tasks more challenging. Most current monocular 3D vehicle detection methods still rely on 2D detectors and additional geometric constraint modules to recover 3D vehicle information, which reduces efficiency. Moreover, most existing research is based on datasets captured from onboard viewpoints rather than roadside perspectives, which limits large-scale 3D perception. We therefore focus on 3D vehicle detection in roadside scenes without 2D detectors. We propose CenterLoc3D, a 3D vehicle localization network for roadside monocular cameras that directly predicts the centroid and eight vertices of the 3D bounding box in image space, together with its dimensions, without a 2D detector. To improve the precision of 3D vehicle localization, we embed a multi-scale weighted-fusion module and a loss with spatial constraints in CenterLoc3D. First, the transformation matrix between 2D image space and 3D world space is obtained by camera calibration. Second, CenterLoc3D predicts the vehicle type, centroid, eight vertices and dimensions of the 3D bounding box. Finally, the centroid in 3D world space is recovered from the calibration and the CenterLoc3D predictions, yielding the 3D vehicle localization. To the best of our knowledge, this is the first application of 3D vehicle localization for roadside monocular cameras; hence, we also propose a benchmark for this application comprising a dataset (SVLD-3D), an annotation tool (LabelImg-3D) and evaluation metrics. Experimental validation shows that the proposed method achieves high accuracy, with an AP3D of 51.30%, average 3D localization precision of 98% and average 3D dimension precision of 85%, while running in real time at 41.18 FPS.
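
The final localization step summarized above (recovering the world-space centroid from the calibration and the network outputs) can be illustrated with a minimal sketch. This is a hypothetical example, assuming the calibration yields a 3×3 homography H that maps image points on the road plane to world ground coordinates and that the bottom-face centre of the predicted box lies on that plane; the function names, the example H and the half-height lift are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def image_to_ground(H, pt_img):
    """Map an image point assumed to lie on the road plane to world ground
    coordinates using the calibration homography H (3x3). Illustrative only."""
    p = H @ np.array([pt_img[0], pt_img[1], 1.0])
    return p[:2] / p[2]  # homogeneous normalization

def localize_vehicle(H, bottom_center_img, box_height_m):
    """Hypothetical localization step: project the predicted bottom-face centre
    onto the ground plane, then lift by half the box height to approximate the
    world-space centroid (X, Y, Z)."""
    X, Y = image_to_ground(H, bottom_center_img)
    return np.array([X, Y, box_height_m / 2.0])

# Example with made-up calibration and network outputs
H = np.array([[0.02, 0.00, -12.0],
              [0.00, 0.05, -30.0],
              [0.00, 0.00,   1.0]])
centroid_world = localize_vehicle(H, bottom_center_img=(640.0, 540.0), box_height_m=1.6)
print(centroid_world)  # world-space centroid in metres
```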

Citation (APA)

Tang, X., Wang, W., Song, H., & Zhao, C. (2023). CenterLoc3D: monocular 3D vehicle localization network for roadside surveillance cameras. Complex & Intelligent Systems, 9(4), 4349–4368. https://doi.org/10.1007/s40747-022-00962-9
