We introduce an unsupervised GAN-based model for shading photorealistic hair animations. Our model is much faster than previous rendering algorithms and produces fewer artifacts than other neural image-translation methods. The main idea is to extend the CycleGAN architecture to handle the semitransparent appearance of hair and to faithfully reproduce the interaction of light with the scene. We add two constraints to ensure temporal coherence and highlight stability. Our approach outperforms previous methods in both output quality and computational efficiency.
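The abstract mentions a constraint enforcing temporal coherence between consecutive shaded frames. As a minimal sketch, assuming the constraint penalizes frame-to-frame differences in the generator output (the paper's actual loss terms, symbols, and any motion compensation may differ), such a penalty could look like:

```python
import numpy as np

def temporal_coherence_penalty(frame_t: np.ndarray, frame_prev: np.ndarray) -> float:
    """Mean L1 difference between two consecutive shaded frames.

    Hypothetical illustration of a temporal-coherence constraint:
    identical frames incur zero penalty, and flicker between
    frames increases it. Not the paper's exact formulation.
    """
    return float(np.mean(np.abs(frame_t - frame_prev)))

# Two identical 4x4 RGB frames produce zero penalty.
a = np.ones((4, 4, 3), dtype=np.float32)
b = np.ones((4, 4, 3), dtype=np.float32)
print(temporal_coherence_penalty(a, b))  # 0.0
```

In training, a term like this would be weighted and added to the adversarial loss so the generator is discouraged from producing flickering shading across an animation.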
Qiao, Z., & Kanai, T. (2021). A GAN-based temporally stable shading model for fast animation of photorealistic hair. Computational Visual Media, 7(1), 127–138. https://doi.org/10.1007/s41095-020-0201-9