LEARNING the 3D POSE of VEHICLES from 2D VEHICLE PATCHES

1 citation · 6 Mendeley readers

Abstract

Estimating vehicle poses is crucial for generating precise movement trajectories from (surveillance) camera data. Additionally, for real-time applications this task has to be solved efficiently. In this paper we introduce a deep convolutional neural network for pose estimation of vehicles from image patches. For a given 2D image patch, our approach estimates the 2D image coordinates of the vehicle's exact center ground point (cx, cy) and the orientation of the vehicle, represented by the elevation angle (e) of the camera with respect to the vehicle's center ground point and the azimuth rotation (a) of the vehicle with respect to the camera. Training an accurate model requires a large and diverse training dataset, yet collecting and labeling such a large amount of data is very time consuming and expensive. Due to the lack of a sufficient amount of training data, we furthermore show that rendered 3D vehicle models with artificially generated textures are nearly adequate for training.
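The abstract describes a four-value pose output: the center ground point (cx, cy) in image coordinates plus the elevation (e) and azimuth (a) angles. As a minimal sketch (not the authors' implementation; the function name and angle conventions are assumptions), the two predicted angles can be turned into a unit viewing direction from the camera toward the vehicle's ground point:

```python
import math

def pose_to_direction(azimuth_deg, elevation_deg):
    """Convert predicted azimuth/elevation angles (degrees) into a
    unit direction vector, assuming azimuth is measured in the
    ground plane and elevation above it (a common convention;
    the paper's exact convention may differ)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (
        math.cos(el) * math.cos(az),  # x: forward component
        math.cos(el) * math.sin(az),  # y: lateral component
        math.sin(el),                 # z: vertical component
    )

# Example: zero azimuth and zero elevation point along the x-axis.
v = pose_to_direction(0.0, 0.0)
```

Combined with the estimated ground point (cx, cy) and the camera calibration, such a direction vector is what allows the 2D patch prediction to be lifted into a 3D trajectory.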

Citation (APA)

Koetsier, C., Peters, T., & Sester, M. (2020). LEARNING the 3D POSE of VEHICLES from 2D VEHICLE PATCHES. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives (Vol. 43, pp. 683–688). International Society for Photogrammetry and Remote Sensing. https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-683-2020
