Abstract
We propose a differentiable rendering algorithm for efficient novel view synthesis. By departing from volume-based representations in favor of a learned point representation, we improve on existing methods by more than an order of magnitude in memory and runtime, both in training and inference. The method begins with a uniformly sampled random point cloud and learns per-point position and view-dependent appearance, using a differentiable splat-based renderer to train the model to reproduce a set of input training images with given camera poses. Our method is up to 300× faster than NeRF in both training and inference, with only a marginal sacrifice in quality, while using less than 10 MB of memory for a static scene. For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at a near-interactive rate, while maintaining high image quality and temporal coherence even without imposing any temporal-coherency regularizers.
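To illustrate the core idea of optimizing point attributes through a differentiable splat-based renderer, here is a minimal toy sketch in NumPy. It is not the paper's method: it splats points in 2D with fixed positions and isotropic Gaussian footprints, and fits only per-point scalar colors (the paper additionally learns positions and view-dependent appearance in 3D). Because splatting is linear in the colors, the gradient of an L2 image loss can be written analytically; all function names, shapes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def splat_render(positions, colors, size=8, sigma=1.0):
    """Render scalar point colors to an image by Gaussian splatting.

    The output is linear in `colors`: img = sum_i colors[i] * W[i],
    where W[i] is point i's Gaussian footprint on the pixel grid.
    """
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    weights = [
        np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
        for (px, py) in positions
    ]
    W = np.stack(weights)                      # (n_points, H, W)
    img = np.tensordot(colors, W, axes=1)      # (H, W)
    return img, W

def fit_colors(positions, target, steps=200, lr=0.05):
    """Gradient descent on 0.5 * sum((render - target)^2) w.r.t. colors."""
    colors = np.zeros(len(positions))
    for _ in range(steps):
        img, W = splat_render(positions, target.shape[0] * 0 + colors)
        resid = img - target                   # dL/d(img)
        grad = np.tensordot(W, resid, axes=2)  # chain rule through the linear splat
        colors -= lr * grad
    return colors
```

A usage example: render a target image from known colors, then recover those colors from the image alone by descending the analytic gradient. The paper's renderer plays the same role but also backpropagates into point positions and a view-conditioned appearance model.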
Zhang, Q., Baek, S. H., Rusinkiewicz, S., & Heide, F. (2022). Differentiable Point-Based Radiance Fields for Efficient View Synthesis. In Proceedings - SIGGRAPH Asia 2022 Conference Papers. Association for Computing Machinery, Inc. https://doi.org/10.1145/3550469.3555413