CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning


Abstract

Neural Radiance Fields (NeRF) have demonstrated impressive performance in novel view synthesis. However, NeRF and most of its variants still rely on complex traditional pipelines, such as COLMAP, to provide extrinsic and intrinsic camera parameters. Recent works, such as NeRFmm, BARF, and L2G-NeRF, treat camera parameters as learnable and estimate them directly through differentiable volume rendering. However, these methods only work for forward-looking scenes with small camera motion and fail in practice on trajectories with large rotations. To overcome this limitation, we propose a novel camera parameter free neural radiance field (CF-NeRF), which, inspired by incremental structure from motion (SfM), incrementally reconstructs the 3D representation and recovers the camera parameters. Given a sequence of images, CF-NeRF estimates the camera parameters of the images one by one and reconstructs the scene through initialization, implicit localization, and implicit optimization. To evaluate our method, we use a challenging real-world dataset, NeRFBuster, which provides 12 scenes under complex trajectories. Results demonstrate that CF-NeRF is robust to rotation and achieves state-of-the-art results without requiring prior information or constraints.
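At a high level, the three stages form an incremental loop: initialize the field and the first poses, localize each new image against the current field, then jointly refine the field and all registered poses. The sketch below illustrates this loop under stated assumptions; the TinyNeRF stand-in, photometric_loss, and all hyperparameters are illustrative and not the authors' implementation.

```python
import torch

class TinyNeRF(torch.nn.Module):
    """Stand-in radiance field: maps a 6-D camera pose vector straight to a
    rendered image. A real NeRF would ray-march through an MLP instead."""
    def __init__(self, h=8, w=8):
        super().__init__()
        self.h, self.w = h, w
        self.net = torch.nn.Linear(6, h * w * 3)

    def render(self, pose):
        return self.net(pose).view(self.h, self.w, 3)

def photometric_loss(nerf, image, pose):
    # Compare the rendering from `pose` with the observed image.
    return (nerf.render(pose) - image).abs().mean()

def reconstruct(images, nerf, iters=100, lr=1e-3):
    # Initialization: recover the first two poses while training the field.
    poses = [torch.zeros(6, requires_grad=True) for _ in images[:2]]
    opt = torch.optim.Adam(list(nerf.parameters()) + poses, lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        sum(photometric_loss(nerf, im, p)
            for im, p in zip(images[:2], poses)).backward()
        opt.step()

    for image in images[2:]:
        # Implicit localization: only the new camera's pose is updated,
        # warm-started from the previous pose; the field is not stepped.
        pose = poses[-1].detach().clone().requires_grad_(True)
        loc = torch.optim.Adam([pose], lr=lr)
        for _ in range(iters):
            loc.zero_grad()
            photometric_loss(nerf, image, pose).backward()
            loc.step()
        poses.append(pose)

        # Implicit optimization: jointly refine the field and all
        # registered poses, analogous to bundle adjustment in SfM.
        joint = torch.optim.Adam(list(nerf.parameters()) + poses, lr=lr)
        for _ in range(iters):
            joint.zero_grad()
            sum(photometric_loss(nerf, im, p)
                for im, p in zip(images, poses)).backward()
            joint.step()
    return poses

# Toy usage: four random "frames" standing in for a captured sequence.
frames = [torch.rand(8, 8, 3) for _ in range(4)]
estimated_poses = reconstruct(frames, TinyNeRF(), iters=20)
```

Registering images one by one and warm-starting each pose from its predecessor is what lets this scheme, unlike joint all-at-once pose optimization, cope with large rotations along the trajectory.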

Citation (APA)

Yan, Q., Wang, Q., Zhao, K., Chen, J., Li, B., Chu, X., & Deng, F. (2024). CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 6440–6448). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i6.28464
