Pixel-Level fusion for infrared and visible acquisitions


Abstract

This article presents a combined routine to fuse Long Wave Infrared (7.5–13 μm) and visible (0.38–0.78 μm) acquisitions and to track pedestrians and road information in night or low-light driving scenarios. Three fusion levels are presented and discussed: pixel level, feature level, and decision level. Pixel-level fusion is then applied through a novel combination of an adaptive weighting algorithm for unsaturated data points and PCA-based statistics for saturated pixels. Registration is performed at the hardware level with Gaussian smoothing, while the smoothed histograms provide initial threshold values. The proposed routine is applied to different night-time driving scenarios and compared with the tracking results for non-fused thermal imagery from a commercial night-vision module. The fused imagery, coupled with the proposed pre- and post-processing, provides complete detection of reflective and emitting objects with better shape retrieval and orientation prediction, computed and judged through object morphology. The result of the current work is an enhanced passive display, with motion prediction, for passing vehicles with glare, pedestrians, and non-emitting road features.
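The abstract describes a two-branch pixel-level rule: adaptive weighting for unsaturated pixels and PCA-derived weights for saturated ones. The sketch below illustrates the general idea only; the window size, saturation threshold, and variance-based weighting are assumptions for illustration, not the authors' exact algorithm, and the inputs are assumed to be pre-registered, normalized grayscale frames.

```python
import numpy as np

def fuse_pixel_level(ir, vis, sat_thresh=0.95, k=5):
    """Illustrative pixel-level fusion of registered IR and visible frames
    (values in [0, 1]). Hypothetical sketch: local-contrast adaptive
    weighting for unsaturated pixels, PCA-based weights for saturated ones.
    """
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)

    # Local variance as a simple contrast measure for adaptive weighting.
    def local_var(img):
        pad = k // 2
        p = np.pad(img, pad, mode="reflect")
        win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
        return win.var(axis=(-2, -1))

    v_ir, v_vis = local_var(ir), local_var(vis)
    # Weight each pixel toward the higher-contrast source.
    w = v_ir / (v_ir + v_vis + 1e-12)
    fused = w * ir + (1.0 - w) * vis

    # For saturated pixels, fall back to global weights taken from the
    # leading principal component of the two-channel data (a common
    # PCA fusion rule).
    saturated = (ir >= sat_thresh) | (vis >= sat_thresh)
    cov = np.cov(np.stack([ir.ravel(), vis.ravel()]))
    _, eigvecs = np.linalg.eigh(cov)
    pc = np.abs(eigvecs[:, -1])          # eigenvector of largest eigenvalue
    pc = pc / pc.sum()                   # normalize to fusion weights
    fused[saturated] = pc[0] * ir[saturated] + pc[1] * vis[saturated]
    return np.clip(fused, 0.0, 1.0)
```

The variance-based weight favors whichever sensor carries more local structure at each pixel, while the PCA fallback avoids letting blown-out (saturated) regions dominate the adaptive weights.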

Citation (APA)
Zhou, Y., & Omar, M. (2009). Pixel-Level fusion for infrared and visible acquisitions. International Journal of Optomechatronics, 3(1), 41–53. https://doi.org/10.1080/15599610902717835
