6DoF pose estimation from a monocular RGB image is a challenging but fundamental task. Methods based on the unit direction vector-field representation and a Hough voting strategy have achieved state-of-the-art performance. Nevertheless, they apply the smooth ℓ1 loss to the two elements of the unit vector separately, which ignores the prior distance between the pixel and the keypoint, even though the positioning error is significantly affected by this distance. In this work, we propose a Prior Distance Augmented Loss (PDAL) that exploits the prior distance for a more accurate vector-field representation. Furthermore, we propose a lightweight channel-level attention module for adaptive feature fusion. Embedding this Adaptive Fusion Attention Module (AFAM) into a U-Net, we build an Attention Voting Network to further improve the performance of our method. We conduct extensive experiments on the LINEMOD, OCCLUSION, and YCB-Video datasets to demonstrate the effectiveness of our methods. Our experiments show that the proposed methods bring significant performance gains and outperform state-of-the-art RGB-based methods without any post-refinement.
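To make the motivation concrete, the following is a minimal sketch of how a prior-distance weighting might augment a per-pixel smooth ℓ1 loss on a unit direction vector field. The function names and the linear weighting scheme (`pdal_loss`, `alpha`, a weight growing with pixel-to-keypoint distance) are illustrative assumptions, not the paper's exact formulation; the idea is only that direction errors at pixels far from the keypoint displace the voted keypoint more, so they receive a larger weight than under a plain smooth ℓ1 loss.

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Elementwise smooth L1 (Huber-style) loss."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def pdal_loss(pred_vec, gt_vec, prior_dist, alpha=1.0):
    """Illustrative prior-distance-augmented loss (assumed form).

    pred_vec, gt_vec: (N, 2) predicted / ground-truth unit direction
        vectors, one per pixel.
    prior_dist: (N,) distance from each pixel to the target keypoint.
    The per-pixel smooth L1 term is scaled by a weight that grows
    linearly with the (normalized) prior distance, so vectors at
    distant pixels are penalized more heavily.
    """
    per_elem = smooth_l1(pred_vec - gt_vec)          # (N, 2)
    per_pix = per_elem.sum(axis=1)                   # (N,)
    weight = 1.0 + alpha * prior_dist / prior_dist.mean()
    return float((weight * per_pix).mean())
```

Under this sketch, two pixels with identical direction errors contribute unequally: the one farther from the keypoint dominates the loss, which is exactly the behavior the separate per-element smooth ℓ1 loss lacks.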
He, Y., Li, J., Zhou, X., Chen, Z., & Liu, X. (2021). Attention voting network with prior distance augmented loss for 6DoF pose estimation. IEICE Transactions on Information and Systems, E104D(7), 1039–1048. https://doi.org/10.1587/transinf.2020EDP7235