Scene flow estimation from 3D point clouds based on dual-branch implicit neural representations


Abstract

Recently, online optimisation-based scene flow estimation has attracted significant attention due to its strong domain adaptivity. Although online optimisation-based methods have made significant advances, their performance remains far from satisfactory because only flow priors are considered, neglecting the scene priors that are crucial for representing dynamic scenes. To address this problem, the authors introduce a dual-branch MLP-based architecture that encodes implicit scene representations from a source 3D point cloud and can additionally synthesise a target 3D point cloud. The mapping function between the source and synthesised target 3D point clouds thus serves as an extra implicit regulariser that captures scene priors. Moreover, the model infers both flow and scene priors bidirectionally, effectively establishing spatiotemporal constraints among the synthesised, source, and target 3D point clouds. Experiments on four challenging datasets, including KITTI scene flow, FlyingThings3D, Argoverse, and nuScenes, show that the method achieves competitive results, demonstrating its effectiveness and generality.
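As a rough illustration of the dual-branch idea described above (not the authors' implementation; all layer sizes and names here are hypothetical), one branch of an MLP pair can predict per-point flow while the other synthesises target points directly from the source cloud, and the disagreement between the two serves as the kind of implicit consistency regulariser the abstract mentions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """He-initialised weights and biases for a small MLP (sizes are illustrative)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass: ReLU hidden layers, linear output."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

# Hypothetical dual-branch setup: both branches consume source points of shape (N, 3).
flow_branch = init_mlp([3, 64, 64, 3], rng)    # flow prior: predicts per-point 3D flow
scene_branch = init_mlp([3, 64, 64, 3], rng)   # scene prior: synthesises target points

source = rng.standard_normal((1024, 3))
flow = mlp(flow_branch, source)                # warped source: source + flow
synth_target = mlp(scene_branch, source)       # implicitly synthesised target cloud

# Illustrative implicit regulariser: the warped source and the synthesised
# target should agree, coupling the flow and scene priors.
consistency = np.mean(np.sum((source + flow - synth_target) ** 2, axis=-1))
```

In the paper this coupling is optimised online per scene pair together with losses against the observed target cloud; the sketch only shows how two branches over the same source points can constrain each other.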

Citation (APA)

Zhai, M., Ni, K., Xie, J., & Gao, H. (2024). Scene flow estimation from 3D point clouds based on dual-branch implicit neural representations. IET Computer Vision, 18(2), 210–223. https://doi.org/10.1049/cvi2.12237
