Low Light Video Enhancement Using Synthetic Data Produced with an Intermediate Domain Mapping


Abstract

Advances in low-light video RAW-to-RGB translation are opening up the possibility of fast low-light imaging on commodity devices (e.g. smartphone cameras) without the need for a tripod. However, it is challenging to collect the required paired short-long exposure frames to learn a supervised mapping. Current approaches require a specialised rig or the use of static videos with no subject or object motion, resulting in datasets that are limited in size, diversity, and motion. We address the data collection bottleneck for low-light video RAW-to-RGB by proposing a data synthesis mechanism, dubbed SIDGAN, that can generate abundant dynamic video training pairs. SIDGAN maps videos found ‘in the wild’ (e.g. internet videos) into a low-light (short, long exposure) domain. By generating dynamic video data synthetically, we enable a recently proposed state-of-the-art RAW-to-RGB model to attain higher image quality (improved colour, reduced artifacts) and improved temporal consistency, compared to the same model trained with only static real video data.
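The abstract does not spell out the mapping architecture, but the core idea, composing learned exposure-domain mappings so that ordinary videos yield synthetic (short, long) exposure training pairs, can be sketched in code. The PyTorch fragment below is a minimal, hypothetical illustration, not the authors' implementation: the `Generator` class, the generator names, the channel counts, and the RGGB Bayer packing are all assumptions made for the example.

```python
# Toy sketch of an intermediate-domain mapping: two generators are composed
# so that an unlabeled "in the wild" RGB frame yields a synthetic
# (short-exposure, long-exposure) training pair. All shapes and names here
# are illustrative assumptions, not the SIDGAN architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    """Toy stand-in for a GAN generator (e.g. a CycleGAN-style network)."""
    def __init__(self, in_ch, out_ch, width=32):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, width),
            conv_block(width, width),
            nn.Conv2d(width, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Stage 1: web RGB frame -> intermediate domain (long-exposure-like frame).
g_rgb_to_long = Generator(in_ch=3, out_ch=3)
# Stage 2: intermediate domain -> short-exposure RAW-like frame
# (4 channels for an assumed RGGB Bayer packing).
g_long_to_short = Generator(in_ch=3, out_ch=4)

frame = torch.rand(1, 3, 128, 128)               # unlabeled "in the wild" frame
long_exposure = g_rgb_to_long(frame)             # synthetic long-exposure target
short_exposure = g_long_to_short(long_exposure)  # synthetic short-exposure input
# (short_exposure, long_exposure) now forms one synthetic training pair
# for a supervised low-light RAW-to-RGB model.
```

Presumably each stage would be trained adversarially against real frames from its target exposure domain; the toy networks above exist only to show how composing the two mappings turns one unlabeled frame into a paired training example.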

Citation (APA)

Triantafyllidou, D., Moran, S., McDonagh, S., Parisot, S., & Slabaugh, G. (2020). Low Light Video Enhancement Using Synthetic Data Produced with an Intermediate Domain Mapping. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12358 LNCS, pp. 103–119). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58601-0_7
