Deep video dehazing

Abstract

Haze is a major problem in videos captured outdoors. Unlike single-image dehazing, video-based approaches can exploit the abundant information available across neighboring frames. In this work, assuming that a scene point yields highly correlated transmission values in adjacent video frames, we develop a deep learning solution for video dehazing in which a CNN is trained end-to-end to accumulate information across frames for transmission estimation. The estimated transmission map is then used to recover a haze-free frame via the atmospheric scattering model. To train this network, we generate a dataset of synthetic hazy and haze-free videos for supervision, based on the NYU depth dataset. We show that the features learned from this dataset can remove haze arising in outdoor scenes across a wide range of videos. Extensive experiments demonstrate that the proposed algorithm performs favorably against state-of-the-art methods on both synthetic and real-world videos.
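The recovery step described in the abstract relies on the atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)), where J is the scene radiance, t the transmission, and A the atmospheric light; synthetic hazy training pairs can likewise be rendered from RGB-D data via t(x) = exp(-β d(x)). Below is a minimal NumPy sketch of both steps, assuming this standard formulation; the function names and the β, A, and clamping values are illustrative and not taken from the paper's actual pipeline.

```python
import numpy as np

def synthesize_hazy_frame(clear, depth, beta=1.0, airlight=0.8):
    """Render a synthetic hazy frame from a clear frame and its depth map
    using the standard atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  with  t(x) = exp(-beta * d(x)).
    This mirrors how hazy/haze-free video pairs can be generated from RGB-D
    data such as the NYU depth dataset (beta and A here are example values).
    """
    t = np.exp(-beta * depth)           # transmission derived from scene depth
    t = t[..., np.newaxis]              # broadcast over the RGB channels
    return clear * t + airlight * (1.0 - t)

def recover_frame(hazy, transmission, airlight=0.8, t_min=0.1):
    """Invert the scattering model to recover a haze-free frame, given a
    (e.g., CNN-estimated) transmission map. Clamping the transmission at
    t_min avoids amplifying noise where t(x) is close to zero."""
    t = np.maximum(transmission, t_min)[..., np.newaxis]
    return (hazy - airlight) / t + airlight

if __name__ == "__main__":
    # Random arrays stand in for one video frame and its depth map.
    clear = np.random.rand(480, 640, 3)
    depth = np.random.rand(480, 640) * 5.0      # hypothetical depths in meters
    hazy = synthesize_hazy_frame(clear, depth)
    restored = recover_frame(hazy, np.exp(-1.0 * depth))
```

In the paper the transmission map fed to the recovery step is estimated by the CNN from several neighboring frames rather than computed from ground-truth depth; the closed-form version above is only for rendering the synthetic supervision pairs.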

Citation (APA)

Ren, W., & Cao, X. (2018). Deep video dehazing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10735 LNCS, pp. 14–24). Springer. https://doi.org/10.1007/978-3-319-77380-3_2
