Omnidirectional Free-viewpoint Rendering Using a Deformable 3-D Mesh Model

  • Sato, T.
  • Koshizawa, H.
  • Yokoya, N.

Abstract

This paper proposes a method for rendering free-viewpoint images from omnidirectional video using a deformable 3-D mesh model. In the proposed method, a 3-D mesh is placed in front of a virtual viewpoint and deformed using pre-estimated omnidirectional depth maps, which are selected according to the position and posture of the virtual viewpoint. Although our approach is fundamentally a model-based rendering approach, which renders a geometrically correct virtualized world, we employ a viewpoint-dependent deformable 3-D model instead of the unified 3-D model generally used in model-based rendering, in order to avoid the hole problem. In experiments, free-viewpoint images are generated from omnidirectional video captured by an omnidirectional multi-camera system, demonstrating the feasibility of the proposed method for walk-through applications in a virtualized environment.
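The core idea of the abstract, deforming a mesh placed in front of the virtual viewpoint so its vertices match depths from a pre-selected omnidirectional depth map, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `deform_view_dependent_mesh` and the `depth_lookup` callable (standing in for a lookup into the selected omnidirectional depth map) are hypothetical.

```python
import numpy as np

def deform_view_dependent_mesh(viewpoint, directions, depth_lookup):
    """Hypothetical sketch of viewpoint-dependent mesh deformation.

    viewpoint:    (3,) position of the virtual camera
    directions:   (N, 3) unit ray directions through each mesh vertex
    depth_lookup: callable mapping a ray direction to a scene depth,
                  standing in for the pre-estimated omnidirectional
                  depth map selected for this viewpoint
    """
    depths = np.array([depth_lookup(d) for d in directions])  # (N,)
    # Push each vertex along its viewing ray to the looked-up depth,
    # so the mesh approximates the scene geometry seen from here.
    return viewpoint + directions * depths[:, None]

# Toy usage: a constant-depth "scene" places all vertices at radius 2
vp = np.zeros(3)
dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
verts = deform_view_dependent_mesh(vp, dirs, lambda d: 2.0)
# verts is [[0, 0, 2], [2, 0, 0]]
```

Rendering the captured texture onto such a deformed mesh avoids the holes that a single unified 3-D model can exhibit, since the mesh is rebuilt per viewpoint from depth maps chosen near that viewpoint.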

Citation (APA)

Sato, T., Koshizawa, H., & Yokoya, N. (2010). Omnidirectional Free-viewpoint Rendering Using a Deformable 3-D Mesh Model. International Journal of Virtual Reality, 9(1), 37–44. https://doi.org/10.20870/ijvr.2010.9.1.2760
