Abstract
In this paper, we address the problem of simultaneous relighting and novel view synthesis of a complex scene from multi-view images captured under a limited number of light sources. We propose an analysis-synthesis approach called Relit-NeuLF. Following the recent neural 4D light field network (NeuLF) [22], Relit-NeuLF first leverages a two-plane light field representation to parameterize each ray in a 4D coordinate system, enabling efficient learning and inference. Then, we recover the spatially-varying bidirectional reflectance distribution function (SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to map each ray to its SVBRDF components: albedo, normal, and roughness. Based on the decomposed BRDF components and conditioning light directions, a RenderNet learns to synthesize the color of the ray. To self-supervise the SVBRDF decomposition, we encourage the predicted ray color to be close to the physically-based rendering result under the microfacet model. Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data, and outperforms state-of-the-art methods.
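The two-plane parameterization mentioned above reduces each ray to a 4D coordinate by recording where it crosses two parallel planes. A minimal sketch of that idea follows; the plane depths `z_uv` and `z_st` and the function name are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def two_plane_ray(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by its intersections with two parallel planes.

    Returns a 4D coordinate (u, v, s, t): (u, v) is where the ray crosses
    the plane z = z_uv, and (s, t) where it crosses z = z_st. The plane
    depths are free choices made for this sketch.
    """
    d = direction / np.linalg.norm(direction)
    t_uv = (z_uv - origin[2]) / d[2]  # signed distance along ray to first plane
    t_st = (z_st - origin[2]) / d[2]  # signed distance along ray to second plane
    u, v = (origin + t_uv * d)[:2]
    s, t = (origin + t_st * d)[:2]
    return np.array([u, v, s, t])
```

In this scheme a network such as NeuLF (and, per the abstract, Relit-NeuLF's DecomposeNet/RenderNet) can take the compact `(u, v, s, t)` vector as input instead of a full 3D origin and direction.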
Li, Z., Song, L., Chen, Z., Du, X., Chen, L., Yuan, J., & Xu, Y. (2023). Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 7007–7016). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3612160