Label-Guided Auxiliary Training Improves 3D Object Detector


Abstract

Detecting 3D objects from point clouds is a practical yet challenging task that has attracted increasing attention recently. In this paper, we propose a Label-Guided auxiliary training method for 3D object detection (LG3D), which serves as an auxiliary network to enhance the feature learning of existing 3D object detectors. Specifically, we propose two novel modules: a Label-Annotation-Inducer that maps annotations and point clouds in bounding boxes to task-specific representations, and a Label-Knowledge-Mapper that assists the original features to obtain detection-critical representations. The proposed auxiliary network is discarded at inference and thus incurs no extra computational cost at test time. We conduct extensive experiments on both indoor and outdoor datasets to verify the effectiveness of our approach. For example, our proposed LG3D improves VoteNet by 2.5% and 3.1% mAP on the SUN RGB-D and ScanNetV2 datasets, respectively. The code is available at https://github.com/FabienCode/LG3D.
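To make the auxiliary-training pattern described above concrete, the following is a minimal PyTorch-style sketch. The module interfaces, feature dimensions, and the feature-alignment loss are assumptions chosen for illustration; they are not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelAnnotationInducer(nn.Module):
    """Encodes ground-truth box annotations into task-specific features
    (hypothetical interface; the paper also uses the points inside each box)."""
    def __init__(self, ann_dim=7, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ann_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, boxes):          # boxes: (B, N, ann_dim) center/size/yaw
        return self.mlp(boxes)         # (B, N, feat_dim)


class LabelKnowledgeMapper(nn.Module):
    """Fuses label-induced features with the detector's own features to form
    detection-critical targets used only during training."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, det_feats, label_feats):
        return self.fuse(torch.cat([det_feats, label_feats], dim=-1))


def label_guided_aux_loss(det_feats, gt_boxes, inducer, mapper):
    """Auxiliary loss that pulls the detector's features toward the
    label-guided representation; the inducer and mapper are discarded at
    inference, so the detector pays no extra cost at test time."""
    label_feats = inducer(gt_boxes)                 # (B, N, feat_dim)
    guided = mapper(det_feats, label_feats)         # label-enhanced features
    return F.mse_loss(det_feats, guided)


if __name__ == "__main__":
    B, N, D = 2, 32, 128
    det_feats = torch.randn(B, N, D)   # per-proposal features from a base detector such as VoteNet
    gt_boxes = torch.randn(B, N, 7)    # matched ground-truth boxes (illustrative shapes)
    inducer, mapper = LabelAnnotationInducer(7, D), LabelKnowledgeMapper(D)
    aux = label_guided_aux_loss(det_feats, gt_boxes, inducer, mapper)
    print(aux.item())                  # added to the detection loss during training only
```

At test time only the base detector runs; the two auxiliary modules and the alignment loss are removed, which is why the method adds no inference-time overhead.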

Citation (APA)

Huang, Y., Liu, X., Zhu, Y., Xu, Z., Shen, C., Che, Z., … Tang, J. (2022). Label-Guided Auxiliary Training Improves 3D Object Detector. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13669 LNCS, pp. 684–700). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20077-9_40
