Efficient Neural Radiance Fields for Interactive Free-viewpoint Video

72 citations · 40 Mendeley readers
Abstract

This paper aims to tackle the challenge of efficiently producing interactive free-viewpoint videos. Some recent works equip neural radiance fields with image encoders, enabling them to generalize across scenes. When processing dynamic scenes, they can simply treat each video frame as an individual scene and perform novel view synthesis to generate free-viewpoint videos. However, their rendering process is slow and cannot support interactive applications. A major factor is that they sample many points in empty space when inferring radiance fields. We propose a novel scene representation, called ENeRF, for the fast creation of interactive free-viewpoint videos. Specifically, given multi-view images at one frame, we first build a cascade cost volume to predict the coarse geometry of the scene. The coarse geometry allows us to sample only a few points near the scene surface, thereby significantly improving the rendering speed. This process is fully differentiable, enabling us to jointly learn the depth prediction and radiance field networks from RGB images. Experiments on multiple benchmarks show that our approach exhibits competitive performance while being at least 60 times faster than previous generalizable radiance field methods.
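The core speed-up described above is depth-guided sampling: instead of evaluating the radiance field at many points spread uniformly along each ray (most of which land in empty space), samples are concentrated in a narrow band around the coarse surface depth predicted by the cost volume. A minimal sketch of that idea is shown below; the function names, the `margin` parameter, and the scalar per-ray interface are illustrative assumptions, not ENeRF's actual API.

```python
import numpy as np

def sample_near_surface(ray_near, ray_far, depth, num_samples, margin=0.1):
    """Place samples in a narrow band around the predicted surface depth.

    `depth` stands in for the per-ray coarse depth estimated from the
    cascade cost volume; `margin` is an illustrative band half-width.
    """
    lo = max(depth - margin, ray_near)   # clamp band to the ray extent
    hi = min(depth + margin, ray_far)
    t = np.linspace(0.0, 1.0, num_samples)
    return lo + t * (hi - lo)

def sample_uniform(ray_near, ray_far, num_samples):
    """Baseline: uniform samples spanning the full [near, far] range."""
    return np.linspace(ray_near, ray_far, num_samples)

# With a depth estimate, 8 samples cover the surface region that a
# uniform scheme would need far more samples to hit densely.
surface_samples = sample_near_surface(0.5, 5.0, depth=2.0, num_samples=8)
uniform_samples = sample_uniform(0.5, 5.0, num_samples=64)
```

Because the band stays close to the predicted surface, far fewer network evaluations per ray are needed, which is where the reported speed-up over uniform-sampling generalizable radiance fields comes from.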

Citation (APA)

Lin, H., Peng, S., Xu, Z., Yan, Y., Shuai, Q., Bao, H., & Zhou, X. (2022). Efficient Neural Radiance Fields for Interactive Free-viewpoint Video. In Proceedings - SIGGRAPH Asia 2022 Conference Papers. Association for Computing Machinery, Inc. https://doi.org/10.1145/3550469.3555376
