Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos

Abstract

Recent advancements in 4D scene reconstruction using neural radiance fields (NeRF) have demonstrated the ability to represent dynamic scenes from multi-view videos. However, these methods fail to reconstruct dynamic scenes in unsynchronized settings, struggling to fit even the training views. This occurs because they employ a single latent embedding per frame, while the multi-view images nominally assigned to the same frame were actually captured at different moments. To address this limitation, we introduce a time offset for each unsynchronized video and jointly optimize the offsets with the NeRF. By design, our method is applicable to various baselines and improves them by large margins. Furthermore, learning the offsets naturally synchronizes the videos without manual effort. We conduct experiments on the common Plenoptic Video Dataset and a newly built Unsynchronized Dynamic Blender Dataset to verify the performance of our method. Project page: https://seoha-kim.github.io/sync-nerf.
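To make the core idea concrete, below is a minimal PyTorch-style sketch of jointly optimizing per-camera time offsets with a time-conditioned NeRF, as the abstract describes. It is an illustration under assumed conventions, not the authors' implementation: the module name PerCameraTimeOffset and the stand-in dynamic_nerf are hypothetical.

import torch
import torch.nn as nn

class PerCameraTimeOffset(nn.Module):
    """Learnable per-camera time offsets (hypothetical sketch).

    Each of the K unsynchronized cameras gets a scalar offset added to its
    nominal frame time before the dynamic NeRF's temporal embedding is
    queried, so the model can compensate for capture-time misalignment.
    """
    def __init__(self, num_cameras: int):
        super().__init__()
        # One learnable scalar offset per camera, initialized to zero.
        self.offsets = nn.Parameter(torch.zeros(num_cameras))

    def forward(self, camera_ids: torch.Tensor, frame_times: torch.Tensor) -> torch.Tensor:
        # Shift each sample's nominal frame time by its camera's offset.
        return frame_times + self.offsets[camera_ids]

# Usage sketch: the offsets are optimized by the same optimizer as the
# NeRF's parameters, so finding them requires no manual synchronization.
time_offsets = PerCameraTimeOffset(num_cameras=20)
# optimizer = torch.optim.Adam(
#     list(dynamic_nerf.parameters()) + list(time_offsets.parameters()),
#     lr=1e-3)  # dynamic_nerf is any time-conditioned NeRF baseline

Because the offsets enter only through the time input, this scheme can be attached to different dynamic NeRF baselines, which matches the paper's claim of broad applicability.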

Citation (APA)

Kim, S., Bae, J., Yun, Y., Lee, H., Bang, G., & Uh, Y. (2024). Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 2777–2785). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i3.28057
