Exposure fusion is an efficient method for obtaining a well-exposed and detailed image of a scene with high dynamic range. However, the method fails in the presence of camera shake and/or object motion. In this work, we tackle this issue by replacing pixel-based fusion with a fusion between pixels having similar neighborhoods (patches) in images acquired with different exposure settings. To achieve this, we compare patches in the luminance domain. We show through several experiments that this procedure yields results comparable to or better than the state of the art, at a reasonable computing time.
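The core idea of matching pixels across exposures via their patch neighborhoods in the luminance domain can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the Rec. 601 luma weights, the sum-of-squared-differences patch distance, and the exhaustive search window are all assumptions chosen for simplicity.

```python
import numpy as np

def luminance(img):
    # Rec. 601 luma weights (an assumed choice); img is an H x W x 3
    # float array with channels in R, G, B order.
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def patch_distance(lum_a, lum_b, pa, pb, half=3):
    # Sum of squared differences between (2*half+1)^2 luminance patches
    # centered at pa in lum_a and pb in lum_b.
    ya, xa = pa
    yb, xb = pb
    patch_a = lum_a[ya - half:ya + half + 1, xa - half:xa + half + 1]
    patch_b = lum_b[yb - half:yb + half + 1, xb - half:xb + half + 1]
    return float(np.sum((patch_a - patch_b) ** 2))

def best_match(lum_ref, lum_other, p, search=5, half=3):
    # Exhaustive search in a (2*search+1)^2 window around p for the
    # patch in the other exposure most similar to the reference patch.
    y, x = p
    best_d, best_q = np.inf, p
    h, w = lum_other.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = (y + dy, x + dx)
            if half <= q[0] < h - half and half <= q[1] < w - half:
                d = patch_distance(lum_ref, lum_other, p, q, half)
                if d < best_d:
                    best_d, best_q = d, q
    return best_q
```

In a full pipeline, the matched pixels from each exposure would then feed the usual exposure-fusion weighting instead of the co-located pixels, which is what makes the fusion robust to motion.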
Ocampo-Blandon, C., & Gousseau, Y. (2017). Non-local exposure fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10125 LNCS, pp. 484–492). Springer Verlag. https://doi.org/10.1007/978-3-319-52277-7_59