Unlike grid-structured RGB images, the irregular and sparse 3D point cloud poses greater challenges for network compression, i.e., pruning and quantization. Traditional quantization ignores the unbalanced semantic distribution in 3D point clouds. In this work, we propose a semantic-guided adaptive quantization framework for 3D point clouds. Unlike traditional quantization methods that adopt a static, uniform quantization scheme, our framework adaptively locates the semantic-rich foreground points in the feature maps and allocates a higher bitwidth to these "important" points. Since foreground points form only a small proportion of the sparse 3D point cloud, such adaptive quantization achieves higher accuracy than uniform compression at a similar compression rate. Furthermore, we adopt a block-wise fine-grained compression scheme in the proposed framework to fit the larger dynamic range of point cloud features. Moreover, a 3D-point-cloud-based software and hardware co-evaluation process is proposed to evaluate the effectiveness of the proposed adaptive quantization on actual hardware devices. On the nuScenes dataset, we achieve a 12.52% precision improvement under average 2-bit quantization. Compared with 8-bit quantization, we achieve 3.11× higher energy efficiency based on the co-evaluation results.
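The core idea of the abstract — allocating a higher bitwidth to the sparse foreground points and a lower one to the background — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`quantize`, `adaptive_quantize`), bitwidth choices, and the assumption that a boolean foreground mask is already available are all illustrative:

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of an array to the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    peak = np.max(np.abs(x)) if x.size else 0.0
    scale = peak / qmax if peak > 0 else 1.0
    return np.round(x / scale).clip(-qmax, qmax) * scale

def adaptive_quantize(features, fg_mask, fg_bits=4, bg_bits=2):
    """Quantize per-point features with more bits for foreground points.

    features: (N, C) point features; fg_mask: (N,) boolean foreground flags.
    """
    out = np.empty_like(features)
    out[fg_mask] = quantize(features[fg_mask], fg_bits)
    out[~fg_mask] = quantize(features[~fg_mask], bg_bits)
    return out

# Toy example: with ~10% foreground, the average bitwidth stays close to
# bg_bits, matching the low foreground proportion noted in the abstract.
rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 16)).astype(np.float32)
mask = rng.random(1000) < 0.1
q = adaptive_quantize(feats, mask)
avg_bits = mask.mean() * 4 + (1 - mask.mean()) * 2
```

With 2-bit background quantization the background features collapse onto only three levels ({-1, 0, +1} times one scale), which is where the compression comes from; the block-wise scheme in the paper would further split the tensor into blocks, each with its own scale, to handle the large dynamic range.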
Feng, X., Tang, C., Zhang, Z., Sun, W., & Liu, Y. (2023). Semantic Guided Fine-Grained Point Cloud Quantization Framework for 3D Object Detection. In Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC (pp. 390–395). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3566097.3567874