Where and How: Mitigating Confusion in Neural Radiance Fields from Sparse Inputs


Abstract

Neural Radiance Fields from Sparse inputs (NeRF-S) have shown great potential in synthesizing novel views from a limited number of observed viewpoints. However, due to the inherent limitations of sparse inputs and the gap between non-adjacent views, rendering results often suffer from over-fitting and foggy surfaces, a phenomenon we refer to as "CONFUSION" during volume rendering. In this paper, we analyze the root cause of this confusion and attribute it to two fundamental questions: "WHERE" and "HOW". To this end, we present a novel learning framework, WaH-NeRF, which effectively mitigates confusion by tackling the following challenges: (i) "WHERE" to sample in NeRF-S — we introduce a Deformable Sampling strategy and a Weight-based Mutual Information Loss to address sample-position confusion arising from the limited number of viewpoints; and (ii) "HOW" to predict in NeRF-S — we propose a Semi-Supervised NeRF Learning Paradigm based on pose perturbation and a Pixel-Patch Correspondence Loss to alleviate prediction confusion caused by the disparity between training and testing viewpoints. By integrating our proposed modules and loss functions, WaH-NeRF outperforms previous methods under the NeRF-S setting. Code is available at https://github.com/bbbbby-99/WaH-NeRF.
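The "foggy surfaces" the abstract describes show up in the standard NeRF volume-rendering weights: when density is smeared along a ray instead of concentrated at a surface, no single sample dominates the rendered color. The sketch below illustrates this with the textbook weight formula w_i = T_i (1 − exp(−σ_i δ_i)); it is a generic NeRF illustration, not the WaH-NeRF implementation, and the function name and example densities are invented for this sketch.

```python
import numpy as np

def volume_rendering_weights(sigma, delta):
    """Standard NeRF compositing weights along one ray:
    w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i = prod_{j<i} exp(-sigma_j * delta_j) is the transmittance."""
    alpha = 1.0 - np.exp(-sigma * delta)                       # opacity of each interval
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # light surviving to sample i
    return trans * alpha

# Sharp surface: density concentrated at one sample -> one peaked weight.
sharp = volume_rendering_weights(np.array([0.0, 0.0, 50.0, 0.0]), 0.1)

# "Foggy" surface: density smeared along the ray -> diffuse, low weights.
foggy = volume_rendering_weights(np.array([2.0, 2.0, 2.0, 2.0]), 0.1)
```

Under sparse inputs the optimization can favor the second, smeared solution, which is exactly the sample-position confusion that the paper's Deformable Sampling and Weight-based Mutual Information Loss are designed to suppress.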

Citation (APA)

Bao, Y., Li, Y., Huo, J., Ding, T., Liang, X., Li, W., & Gao, Y. (2023). Where and How: Mitigating Confusion in Neural Radiance Fields from Sparse Inputs. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 2180–2188). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3613769
