Abstract
Physically based hair and fur rendering is crucial for visual realism. One of the key effects is global illumination, which involves light bouncing between different fibers and is very time-consuming to simulate with methods like path tracing. Efficient approximate global illumination techniques such as dual scattering are in widespread use, but they are limited to human hair and cannot handle color bleeding, transparency, or hair-object inter-reflection. We present the first global illumination model, based on dipole diffusion for subsurface scattering, to approximate light bouncing between individual fur fibers. We model complex light and fur interactions as subsurface scattering, and use a simple neural network to convert fur fibers' properties into scattering parameters. Our network is trained on only a single scene with different parameters, yet it applies to general scenes and produces visually accurate appearance, supporting color bleeding and further inter-reflections.
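The abstract's subsurface-scattering foundation is the classical dipole diffusion model of Jensen et al. (2001), which the paper builds on. As a minimal sketch of that component (not the paper's full method, and the parameter values below are illustrative assumptions), the diffuse reflectance profile R_d(r) can be evaluated from an absorption coefficient, a reduced scattering coefficient, and the relative index of refraction:

```python
import math

def classical_dipole_Rd(r, sigma_a, sigma_s_prime, eta):
    """Classical dipole diffusion profile R_d(r) (Jensen et al. 2001).

    r             -- distance between entry and exit points on the surface
    sigma_a       -- absorption coefficient
    sigma_s_prime -- reduced scattering coefficient
    eta           -- relative index of refraction
    """
    sigma_t_prime = sigma_a + sigma_s_prime          # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime      # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coeff.

    # Diffuse Fresnel reflectance (polynomial fit) and boundary term A
    F_dr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)

    # Real source below the surface and mirrored virtual source above it
    z_r = 1.0 / sigma_t_prime
    z_v = z_r * (1.0 + 4.0 * A / 3.0)
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)

    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (1.0 + sigma_tr * d_r) * math.exp(-sigma_tr * d_r) / (d_r ** 3)
        + z_v * (1.0 + sigma_tr * d_v) * math.exp(-sigma_tr * d_v) / (d_v ** 3)
    )
```

The paper's contribution is, in part, mapping fur-fiber properties to scattering parameters like `sigma_a` and `sigma_s_prime` via a small neural network, so that this kind of diffusion profile approximates inter-fiber light transport.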
Yan, L. Q., Sun, W., Jensen, H. W., & Ramamoorthi, R. (2017). A BSSRDF model for efficient rendering of fur with global illumination. In ACM Transactions on Graphics (Vol. 36). Association for Computing Machinery. https://doi.org/10.1145/3130800.3130802