Image Adjustment for Multi-Exposure Images Based on Convolutional Neural Networks


Abstract

In this paper, we propose an image adjustment method for multi-exposure images based on convolutional neural networks (CNNs). We refer to image regions that lack information due to saturation or object motion in multi-exposure images as lacking areas. Lacking areas cause ghosting artifacts in images fused from sets of multi-exposure images, even with conventional fusion methods designed to suppress such artifacts. To avoid this problem, the proposed method estimates the information in lacking areas via adaptive inpainting. The proposed CNN consists of three networks: a warp-and-refinement network, a detection network, and an inpainting network. The second and third networks detect lacking areas and estimate their pixel values, respectively. In the experiments, a simple fusion method combined with the proposed method outperforms state-of-the-art fusion methods in peak signal-to-noise ratio (PSNR). Moreover, when the proposed method is applied as pre-processing for various fusion methods, the results show that it clearly reduces artifacts.
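The detect → inpaint → fuse pipeline described in the abstract can be sketched in miniature. This is an illustrative approximation only: it replaces the paper's detection and inpainting networks with a saturation threshold and a copy-from-reference fill, and uses a per-pixel mean as the simple fusion baseline. All function names, thresholds, and the toy data are assumptions, not taken from the paper.

```python
import numpy as np

def detect_lacking_areas(img, low=0.02, high=0.98):
    """Mask of saturated pixels, standing in for the detection network.
    The thresholds are illustrative assumptions, not from the paper."""
    return (img <= low) | (img >= high)

def inpaint_from_reference(img, mask, reference):
    """Stand-in for the inpainting network: fill lacking areas with
    pixel values from a better-exposed reference image."""
    out = img.copy()
    out[mask] = reference[mask]
    return out

def simple_fusion(images):
    """A simple fusion baseline: per-pixel mean of the adjusted exposures."""
    return np.mean(np.stack(images, axis=0), axis=0)

# Two toy single-channel "exposures": one well exposed, one with a
# saturated region (a lacking area).
under = np.full((4, 4), 0.4)
over = np.full((4, 4), 0.4)
over[:2, :] = 1.0  # saturated rows -> lacking area

mask = detect_lacking_areas(over)
adjusted = inpaint_from_reference(over, mask, under)
fused = simple_fusion([under, adjusted])
```

Because the lacking area is filled before fusion, the saturated rows no longer pull the fused result toward white, which is the artifact-suppression effect the pre-processing aims at.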

Citation (APA)

Funahashi, I., Yoshida, T., Zhang, X., & Iwahashi, M. (2022). Image Adjustment for Multi-Exposure Images Based on Convolutional Neural Networks. IEICE Transactions on Information and Systems, E105D(1), 123–133. https://doi.org/10.1587/transinf.2021EDP7087
