Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing

Abstract

Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity.
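To make the idea of a differentiable Monte Carlo rendering layer with importance sampling concrete, the following minimal PyTorch sketch shows the simplest possible case: cosine-weighted importance sampling of a Lambertian surface lit by an environment, with a learnable albedo recovered by gradient descent through the stochastic estimator. This is only an illustration of the general technique, not the paper's rendering layer (which additionally handles spatially-varying lighting and screen-space ray tracing for specular reflections); names such as shade_lambertian and env_radiance are hypothetical.

    # Minimal sketch of differentiable Monte Carlo shading with cosine-weighted
    # importance sampling for a Lambertian surface (illustrative only).
    import math
    import torch

    def sample_cosine_hemisphere(n_samples, normal):
        """Cosine-weighted directions around `normal` (pdf = cos(theta) / pi)."""
        u1 = torch.rand(n_samples)
        u2 = torch.rand(n_samples)
        r = torch.sqrt(u1)
        phi = 2.0 * math.pi * u2
        local = torch.stack([r * torch.cos(phi),
                             r * torch.sin(phi),
                             torch.sqrt(1.0 - u1)], dim=-1)      # (N, 3), +z up
        # Build an orthonormal frame around the normal.
        up = torch.tensor([0.0, 1.0, 0.0]) if normal[2].abs() > 0.9 else torch.tensor([0.0, 0.0, 1.0])
        t = torch.nn.functional.normalize(torch.linalg.cross(up, normal), dim=0)
        b = torch.linalg.cross(normal, t)
        frame = torch.stack([t, b, normal], dim=0)               # rows: tangent, bitangent, normal
        return local @ frame                                     # (N, 3) world-space directions

    def shade_lambertian(albedo, normal, env_radiance, n_samples=256):
        """Monte Carlo estimate of outgoing radiance for a Lambertian BRDF.
        With cosine-weighted sampling the cosine and pdf terms cancel,
        leaving L_o ~= albedo * mean(L_i)."""
        dirs = sample_cosine_hemisphere(n_samples, normal)
        li = env_radiance(dirs)                                  # (N, 3) incoming radiance
        return albedo * li.mean(dim=0)

    # Toy "environment": brighter toward +z; a stand-in for predicted lighting.
    def env_radiance(dirs):
        w = torch.clamp(dirs[:, 2:3], min=0.0)
        return w * torch.tensor([1.0, 0.9, 0.8])

    if __name__ == "__main__":
        normal = torch.tensor([0.0, 0.0, 1.0])
        target = torch.tensor([0.30, 0.25, 0.20])                  # observed pixel color
        albedo = torch.tensor([0.5, 0.5, 0.5], requires_grad=True) # material to recover
        opt = torch.optim.Adam([albedo], lr=0.05)
        for step in range(200):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(
                shade_lambertian(albedo, normal, env_radiance), target)
            loss.backward()                                        # gradients flow through the MC estimate
            opt.step()
        print("recovered albedo:", albedo.detach())

Because the sampling density matches the cosine term of the rendering equation, the estimator reduces to the albedo times the mean sampled radiance, and autograd propagates a photometric loss back to the material parameter through the stochastic estimate, which is the basic mechanism a learning-based inverse renderer builds on.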

Citation (APA)

Zhu, J., Luan, F., Huo, Y., Lin, Z., Zhong, Z., Xi, D., … Tang, R. (2022). Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing. In Proceedings - SIGGRAPH Asia 2022 Conference Papers. Association for Computing Machinery, Inc. https://doi.org/10.1145/3550469.3555407
