Visible and infrared video fusion using uniform discrete curvelet transform and spatial-temporal information


Abstract

Fusing multiple visual sensors provides an effective way to improve the robustness and accuracy of video surveillance systems. Traditional video fusion methods fuse the source videos frame by frame using static image fusion techniques, without considering information along the temporal dimension, so temporal information is not fully exploited during the fusion procedure. To address this problem, a visible and infrared video fusion method based on the uniform discrete curvelet transform (UDCT) and spatial-temporal information is proposed. The source videos are decomposed using the UDCT, and a set of fusion rules based on local spatial-temporal energy is designed for the decomposition coefficients. These rules consider both the coefficients of the current frame and the coefficients along the temporal dimension, i.e., those of adjacent frames. Experimental results demonstrate that the proposed method works well and outperforms the comparison methods in terms of temporal stability and consistency as well as spatial-temporal information extraction.
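The local spatial-temporal energy rule described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the window size, the choose-max selection rule, and all function names are hypothetical, and the UDCT decomposition itself is omitted (the sketch operates on already-decomposed coefficient stacks of adjacent frames).

```python
import numpy as np

def local_st_energy(coeffs, win=3):
    """Local spatial-temporal energy of a coefficient stack.

    coeffs: array of shape (T, H, W) holding the subband coefficients of
    T adjacent frames (current frame plus temporal neighbors).
    Returns an (H, W) map: the mean squared coefficient over a
    win x win spatial window, accumulated over the temporal dimension.
    """
    sq = (coeffs ** 2).sum(axis=0)          # accumulate energy over time
    pad = win // 2
    padded = np.pad(sq, pad, mode="edge")   # replicate borders
    H, W = sq.shape
    energy = np.empty_like(sq)
    for i in range(H):
        for j in range(W):
            energy[i, j] = padded[i:i + win, j:j + win].mean()
    return energy

def fuse_coefficients(coeffs_a, coeffs_b, win=3):
    """Choose-max fusion rule (assumed): at each spatial position keep the
    current frame's coefficient (temporal index T//2) from the source
    whose local spatial-temporal energy is larger."""
    e_a = local_st_energy(coeffs_a, win)
    e_b = local_st_energy(coeffs_b, win)
    mid = coeffs_a.shape[0] // 2            # index of the current frame
    return np.where(e_a >= e_b, coeffs_a[mid], coeffs_b[mid])

# Example: visible-band coefficients dominate infrared-band ones everywhere,
# so the fused subband should equal the visible-band current frame.
vis = np.full((3, 8, 8), 2.0)
ir = np.full((3, 8, 8), 0.5)
fused = fuse_coefficients(vis, ir)
```

In a full pipeline this rule would be applied per UDCT subband, after which the fused coefficients are inverse-transformed to reconstruct each output frame; using the temporal neighbors in the energy map is what promotes the temporal stability the abstract highlights.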

Citation (APA)

Li, Q., Du, J., & Xu, L. (2015). Visible and infrared video fusion using uniform discrete curvelet transform and spatial-temporal information. Chinese Journal of Electronics, 24(4), 761–766. https://doi.org/10.1049/cje.2015.10.016
