FlowChroma - A deep recurrent neural network for video colorization


Abstract

We develop an automated video colorization framework that minimizes the flickering of colors across frames. When image colorization techniques are applied to successive frames of a video, they treat each frame as a separate colorization task, so they do not necessarily maintain the colors of a scene consistently across subsequent frames. The proposed solution is a novel deep recurrent encoder-decoder architecture capable of maintaining temporal and contextual coherence between consecutive frames of a video. We use a high-level semantic feature extractor to automatically identify the context of a scene, including the objects present, together with a custom fusion layer that combines the spatial and temporal features of a frame sequence. We present experimental results showing qualitatively that recurrent neural networks can be successfully used to improve color consistency in video colorization.
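The fusion layer described above combines per-frame spatial features with global semantic and temporal context. A common way to realize this (the exact layout in the paper may differ) is to replicate a global context vector across every spatial location of the encoder's feature map and concatenate it with the local features there. The following is a minimal, dependency-free sketch of that idea; all names and dimensions are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a spatial-temporal fusion step: a single global
# context vector (e.g. semantic features from a classifier plus recurrent
# temporal state) is appended to the local feature vector at every
# spatial cell of the encoder output. Shapes are illustrative only.

def fuse(local_features, global_features):
    """Concatenate a global context vector onto each spatial cell.

    local_features: H x W grid of per-location feature vectors
                    (nested lists of floats).
    global_features: one context vector (list of floats) shared by
                     all locations of the frame.
    Returns an H x W grid whose vectors have length
    len(local vector) + len(global vector).
    """
    return [
        [cell + global_features for cell in row]
        for row in local_features
    ]

# Toy example: a 2x2 feature map with 3-dim local features and a
# 2-dim global context vector yields fused 5-dim vectors.
local = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
         [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]]
context = [0.9, -0.1]
fused = fuse(local, context)
```

In a real network this concatenation would be followed by learned layers (e.g. 1x1 convolutions) in the decoder, so every output pixel is conditioned on both its local appearance and the frame-level context.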

Citation (APA)

Wijesinghe, T., Abeysinghe, C., Wijayakoon, C., Jayathilake, L., & Thayasivam, U. (2020). FlowChroma - A deep recurrent neural network for video colorization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12131 LNCS, pp. 16–29). Springer. https://doi.org/10.1007/978-3-030-50347-5_2
