A GAN-based temporally stable shading model for fast animation of photorealistic hair


Abstract

We introduce an unsupervised GAN-based model for shading photorealistic hair animations. Our model is much faster than previous rendering algorithms and produces fewer artifacts than other neural image-translation methods. The main idea is to extend the CycleGAN structure to handle the semitransparent appearance of hair and to faithfully reproduce the interaction of the lights with the scene. We use two constraints to ensure temporal coherence and highlight stability. Our approach outperforms previous methods in both quality and computational efficiency.

Citation (APA)

Qiao, Z., & Kanai, T. (2021). A GAN-based temporally stable shading model for fast animation of photorealistic hair. Computational Visual Media, 7(1), 127–138. https://doi.org/10.1007/s41095-020-0201-9

