Time-varying surface reconstruction of an actor’s performance

Abstract

We propose a fully automatic, time-varying surface reconstruction of an actor's performance captured on a production stage through omnidirectional video. The resulting mesh and its texture can then be edited directly in post-production. Our method makes no assumptions about the costumes or accessories present in the recording. We take as input a raw sequence of static volumetric poses reconstructed from video sequences acquired in a multi-viewpoint chroma-key studio. The first frame is chosen as the reference mesh. An iterative approach is applied throughout the sequence to deform the reference mesh to match each input frame. First, a pseudo-rigid transformation adjusts the pose to match the input visual hull as closely as possible; then, local deformation is added to recover fine details. We provide examples of actors' performances inserted into virtual scenes, including dynamic interaction with the environment.
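The two-stage tracking loop described above (a global pseudo-rigid pose adjustment per frame, followed by a local refinement toward the target geometry) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each frame is given as a point set in one-to-one vertex correspondence with the reference mesh (the paper instead matches against visual hulls), uses the standard Kabsch/SVD algorithm for the rigid step, and a simple blend factor `step` as a stand-in for the paper's local deformation; all function names are hypothetical.

```python
import numpy as np

def pseudo_rigid_align(ref_pts, target_pts):
    """Estimate the least-squares rigid transform (R, t) mapping ref_pts
    onto target_pts, via the Kabsch algorithm (SVD of the covariance)."""
    ref_c = ref_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (target_pts - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ ref_c
    return R, t

def track_sequence(reference_mesh, frames, step=0.5):
    """Deform the reference mesh through all frames: first a global
    pseudo-rigid pose adjustment, then a local per-vertex correction
    toward the target (a placeholder for fine-detail deformation)."""
    tracked = []
    current = reference_mesh.copy()
    for target in frames:
        R, t = pseudo_rigid_align(current, target)
        current = current @ R.T + t                      # rigid pose step
        current = current + step * (target - current)    # local refinement
        tracked.append(current.copy())
    return tracked
```

With `step=1.0` the local correction snaps the vertices exactly onto the target; smaller values mimic a gradual detail-recovery pass while the rigid step keeps the overall pose consistent from frame to frame.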

Citation (APA)

Blache, L., Desbrun, M., Loscos, C., & Lucas, L. (2015). Time-varying surface reconstruction of an actor’s performance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9474, pp. 92–101). Springer Verlag. https://doi.org/10.1007/978-3-319-27857-5_9
