Pay attention to devils: A photometric stereo network for better details

Abstract

We present an attention-weighted loss for a photometric stereo neural network that improves 3D surface recovery accuracy in complex-structured areas, such as edges and crinkles, where existing learning-based methods often fail. Instead of applying a uniform penalty to all pixels, our method learns a per-pixel attention-weighted loss in a self-supervised manner, avoiding blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to compute a pixel-wise attention-weighted loss that focuses on complex regions. In these regions, the attention-weighted loss assigns higher weights to a detail-preserving gradient loss, producing sharper surface reconstructions. Experiments on real datasets show that our approach significantly outperforms traditional photometric stereo algorithms and state-of-the-art learning-based methods.
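The per-pixel blending described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the choice of an L2 normal term plus an L1 finite-difference gradient term, and the way the attention map interpolates between them are all assumptions for clarity.

```python
import numpy as np

def attention_weighted_loss(pred_normals, gt_normals, attention, grad_weight=1.0):
    """Hypothetical sketch of a pixel-wise attention-weighted loss.

    pred_normals, gt_normals: (H, W, 3) surface normal maps.
    attention: (H, W) map in [0, 1]; high values mark complex regions
    (edges, crinkles) where the detail-preserving gradient term should dominate.
    """
    # Base per-pixel normal error (assumed L2 here).
    base = np.mean((pred_normals - gt_normals) ** 2, axis=-1)

    # Detail-preserving gradient term: finite differences of the normal maps,
    # which penalize blurred edges more strongly than the base term.
    def finite_diffs(x):
        gx = np.diff(x, axis=1, append=x[:, -1:, :])
        gy = np.diff(x, axis=0, append=x[-1:, :, :])
        return gx, gy

    pgx, pgy = finite_diffs(pred_normals)
    ggx, ggy = finite_diffs(gt_normals)
    grad = np.mean(np.abs(pgx - ggx) + np.abs(pgy - ggy), axis=-1)

    # Attention interpolates the two terms per pixel:
    # flat regions (attention ~ 0) use the base loss, complex regions
    # (attention ~ 1) emphasize the gradient loss.
    per_pixel = (1.0 - attention) * base + attention * grad_weight * grad
    return float(per_pixel.mean())
```

In this sketch a perfect prediction yields zero loss, and any blurring of the normals in high-attention regions is penalized through the gradient term; the paper's self-supervised attention map would replace the hand-set map used here.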

Citation (APA)

Ju, Y., Lam, K. M., Chen, Y., Qi, L., & Dong, J. (2020). Pay attention to devils: A photometric stereo network for better details. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 694–700). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/97
