Consistent semantic annotation of outdoor datasets via 2D/3D label transfer


Abstract

The advance of scene understanding methods based on machine learning relies on the availability of large ground-truth datasets, which are essential for training and evaluation. Constructing such datasets from real sensor imagery, however, typically requires extensive manual annotation of semantic regions, demanding substantial human labour. To speed up this process, we propose a framework for the semantic annotation of scenes captured by moving camera(s), e.g., mounted on a vehicle or robot. The framework uses an available 3D model of the traversed scene: segmented 3D objects are projected into each camera frame to obtain an initial annotation of the associated 2D image, which the user then refines manually. The refined annotation can be transferred to the next consecutive frame using optical flow estimation. We evaluated the efficiency of the proposed framework during the production of a labelled outdoor dataset. Analysis of the annotation times shows that up to 43% less effort is required on average, and the consistency of the labelling is also improved.
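The pipeline combines two transfer steps: projecting a segmented 3D model into each frame to seed the 2D annotation, and propagating the refined labels to the next frame with dense optical flow. The sketch below illustrates both steps under simple assumptions (a pinhole camera, OpenCV's Farneback flow, label maps stored as small integer ids); the function names project_labels and transfer_labels are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
import cv2


def project_labels(points_3d, point_labels, K, R, t, image_shape):
    """Seed a 2D label map by projecting a segmented 3D point model
    into one camera frame (no occlusion handling / z-buffer here)."""
    h, w = image_shape
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world -> camera coordinates
    in_front = cam[:, 2] > 0                      # keep points in front of the camera
    cam, labels = cam[in_front], point_labels[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    label_map = np.zeros((h, w), dtype=np.uint8)  # 0 = unlabelled
    label_map[v[ok], u[ok]] = labels[ok]
    return label_map


def transfer_labels(prev_labels, prev_gray, next_gray):
    """Warp the previous frame's refined label map into the next frame
    using dense optical flow, giving the annotator a starting point."""
    # Compute flow from next to prev, so each next-frame pixel knows
    # where to sample the previous label map (a backward warp).
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = next_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Nearest-neighbour sampling keeps label ids discrete.
    return cv2.remap(prev_labels, map_x, map_y,
                     interpolation=cv2.INTER_NEAREST)
```

In practice the warped label map leaves gaps at disocclusions and flow failures, which is consistent with the paper keeping a manual refinement step after each transfer.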

Cite

APA: Tylecek, R., & Fisher, R. B. (2018). Consistent semantic annotation of outdoor datasets via 2D/3D label transfer. Sensors, 18(7), 2249. https://doi.org/10.3390/s18072249
