Multi-exposure image fusion based on wavelet transform

Abstract

This article proposes a novel wavelet-based algorithm for the fusion of multi-exposure images. By introducing the brightness of the input images into the well-exposedness weight, luminance inversion is suppressed and the contrast of the fused image is enhanced. This weight is used to fuse the approximate sub-bands of the input images in the wavelet domain, while the detail sub-bands are fused by an adjusted contrast weight to avoid losing details around strong edges. In addition, a novel enhancement function is proposed to enhance the details of the fused image. The proposed multi-exposure fusion scheme consists of three steps: (1) transforming the input images into YUV space and fusing the color-difference components U and V according to the saturation weight; (2) transforming the luminance component Y into the wavelet domain and fusing the corresponding approximate sub-bands and detail sub-bands by the well-exposedness weight and the adjusted contrast weight, respectively; and (3) transforming the fused image back into RGB space to obtain the final result. The experiments illustrate that the proposed method effectively preserves details, enhances contrast, and maintains consistency with the luminance distribution of the input images.
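As a rough illustration of the three-step scheme, the following is a minimal Python sketch using OpenCV and PyWavelets. The specific weight formulas used here (a saturation-style weight for U/V, a Gaussian well-exposedness weight for the approximate sub-band, and a maximum-magnitude rule standing in for the adjusted contrast weight), as well as the omission of the detail-enhancement function, are simplifying assumptions that follow the structure of the abstract, not the paper's exact definitions.

```python
# Minimal sketch of the three-step multi-exposure fusion pipeline described
# in the abstract. Weight definitions are illustrative assumptions, not the
# paper's formulas; the detail-enhancement step is omitted.
import numpy as np
import cv2
import pywt

def fuse_multi_exposure(images, wavelet="haar", sigma=0.2):
    """Fuse a list of aligned BGR uint8 exposures (hypothetical helper)."""
    yuv = [cv2.cvtColor(im, cv2.COLOR_BGR2YUV).astype(np.float64) / 255.0
           for im in images]
    Y = [x[..., 0] for x in yuv]

    # Step 1: fuse chrominance (U, V) with a saturation-style weight
    # (assumed here: distance of the chroma channels from neutral 0.5).
    sat_w = np.stack([np.abs(x[..., 1] - 0.5) + np.abs(x[..., 2] - 0.5) + 1e-6
                      for x in yuv])
    sat_w /= sat_w.sum(axis=0, keepdims=True)
    U = sum(w * x[..., 1] for w, x in zip(sat_w, yuv))
    V = sum(w * x[..., 2] for w, x in zip(sat_w, yuv))

    # Step 2: fuse luminance in the wavelet domain.
    coeffs = [pywt.dwt2(y, wavelet) for y in Y]

    # Approximate sub-bands: well-exposedness weight (assumed: Gaussian
    # closeness to mid-gray after normalizing the approximation band).
    cA = np.stack([c[0] for c in coeffs])
    scale = cA.max() + 1e-6
    exp_w = np.exp(-((cA / scale - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
    exp_w /= exp_w.sum(axis=0, keepdims=True)
    fused_A = (exp_w * cA).sum(axis=0)

    # Detail sub-bands: contrast-style rule (assumed: keep the coefficient
    # with the largest magnitude across exposures).
    fused_details = []
    for band in range(3):
        d = np.stack([c[1][band] for c in coeffs])
        idx = np.abs(d).argmax(axis=0)
        fused_details.append(np.take_along_axis(d, idx[None], axis=0)[0])

    fused_Y = pywt.idwt2((fused_A, tuple(fused_details)), wavelet)
    fused_Y = fused_Y[: Y[0].shape[0], : Y[0].shape[1]]

    # Step 3: reassemble YUV and transform back to BGR.
    out = np.stack([fused_Y, U, V], axis=-1)
    out = np.clip(out * 255.0, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YUV2BGR)
```

In this sketch the approximate sub-bands are blended softly (weighted average) while the detail sub-bands use a hard per-coefficient selection; the paper's adjusted contrast weight would replace that selection rule.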

Citation (APA)

Zhang, W., Liu, X., Wang, W., & Zeng, Y. (2018). Multi-exposure image fusion based on wavelet transform. International Journal of Advanced Robotic Systems, 15(2). https://doi.org/10.1177/1729881418768939
