We present a learning-based technique for estimating high dynamic range (HDR), omnidirectional illumination from a single low dynamic range (LDR) portrait image captured under arbitrary indoor or outdoor lighting conditions. We train our model using portrait photos paired with their ground truth illumination. We generate a rich set of such photos by using a light stage to record the reflectance field and alpha matte of 70 diverse subjects in various expressions. We then relight the subjects using image-based relighting with a database of one million HDR lighting environments, compositing them onto paired high-resolution background imagery recorded during the lighting acquisition. We train the lighting estimation model using rendering-based loss functions and add a multi-scale adversarial loss to estimate plausible high-frequency lighting detail. We show that our technique outperforms the state-of-the-art technique for portrait-based lighting estimation, and we also show that our method reliably handles the inherent ambiguity between overall lighting strength and surface albedo, recovering a similar scale of illumination for subjects with diverse skin tones. Our method allows virtual objects and digital characters to be added to a portrait photograph with consistent illumination. As our inference runs in real time on a smartphone, we enable realistic rendering and compositing of virtual objects into live video for augmented reality.
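The image-based relighting step described above is, at its core, a linear combination: each pixel of the relit portrait is a weighted sum of the one-light-at-a-time (OLAT) reflectance-field images, with per-channel weights taken from the target HDR lighting environment. A minimal sketch, assuming the reflectance field is stored as an array of shape (N, H, W, 3) and the environment has been resampled to N RGB values (one per light stage source; array names and shapes are illustrative, not from the paper):

```python
import numpy as np

def relight(olat_images: np.ndarray, env_colors: np.ndarray) -> np.ndarray:
    """Image-based relighting as a weighted sum of OLAT basis images.

    olat_images: (N, H, W, 3) reflectance field, one image per light source
    env_colors:  (N, 3) RGB intensity of the HDR environment at each source
    returns:     (H, W, 3) portrait relit by the target environment
    """
    # relit[h, w, c] = sum_n olat_images[n, h, w, c] * env_colors[n, c]
    return np.einsum('nhwc,nc->hwc', olat_images, env_colors)

# Tiny synthetic example: two lights, a 4x4 image
olat = np.ones((2, 4, 4, 3), dtype=np.float32)
env = np.array([[1.0, 0.0, 0.0],   # light 0 is pure red
                [0.0, 1.0, 0.0]],  # light 1 is pure green
               dtype=np.float32)
relit = relight(olat, env)
```

Because the sum is linear in the environment, the same OLAT capture can be relit by any number of environments, which is what makes generating millions of training pairs from 70 subjects tractable.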
LeGendre, C., Ma, W. C., Pandey, R., Fanello, S., Rhemann, C., Dourgarian, J., … Debevec, P. (2020). Learning Illumination from Diverse Portraits. In SIGGRAPH Asia 2020 Technical Communications (SA 2020). Association for Computing Machinery. https://doi.org/10.1145/3410700.3425432