Intrinsic image decomposition is a highly ill-posed problem in computer vision that aims to extract albedo and shading from a single image. In this paper, we regard it as an image-to-image translation task and propose a novel approach that uses parallel convolutional neural networks (ParCNN) to learn albedo and shading, which have different spatial features and data distributions, respectively. At the same time, energy is preserved as much as possible under the constraint of an image reconstruction loss shared by the two networks. Moreover, we add a gradient prior based on the traditional image formation process to the loss function, which improves the performance of our basic learning model by combining the advantages of physically-based and data-driven methods. We train and test the model on the MPI Sintel dataset, where quantitative and qualitative evaluations outperform state-of-the-art methods.
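The loss structure described above (per-branch supervision, a shared reconstruction constraint, and a gradient prior) can be sketched as follows. This is a minimal illustrative NumPy version, not the paper's implementation: the element-wise model image = albedo * shading, the function names, and the weights `w_rec` and `w_grad` are all assumptions for exposition.

```python
import numpy as np

def reconstruction_loss(albedo, shading, image):
    # Intrinsic image model (assumed): image = albedo * shading, element-wise.
    # This shared term ties the two parallel branches together.
    return np.mean((albedo * shading - image) ** 2)

def gradient_prior_loss(pred, target):
    # Penalize mismatch between spatial gradients of prediction and target,
    # in the spirit of the physically-based image-formation prior.
    gx_p, gy_p = np.gradient(pred)
    gx_t, gy_t = np.gradient(target)
    return np.mean(np.abs(gx_p - gx_t)) + np.mean(np.abs(gy_p - gy_t))

def total_loss(albedo, shading, image, albedo_gt, shading_gt,
               w_rec=1.0, w_grad=0.5):
    # Combined objective: per-branch MSE supervision, shared reconstruction
    # loss, and gradient priors; the weights here are illustrative only.
    mse = lambda a, b: np.mean((a - b) ** 2)
    return (mse(albedo, albedo_gt) + mse(shading, shading_gt)
            + w_rec * reconstruction_loss(albedo, shading, image)
            + w_grad * (gradient_prior_loss(albedo, albedo_gt)
                        + gradient_prior_loss(shading, shading_gt)))
```

When the predicted albedo and shading match the ground truth and multiply back to the input image, every term vanishes; any reconstruction or gradient mismatch increases the objective.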
CITATION STYLE
Yuan, Y., Sheng, B., Li, P., Bi, L., Kim, J., & Wu, E. (2019). Deep Intrinsic Image Decomposition Using Joint Parallel Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11542 LNCS, pp. 336–341). Springer Verlag. https://doi.org/10.1007/978-3-030-22514-8_28