FilterNet: Self-Supervised Learning for High-Resolution Photo Enhancement

Abstract

We propose an end-to-end model for high-resolution photographic enhancement using a self-supervised learning approach. Low-quality photos are generated from an unlabelled dataset of high-quality images. Our deep network model is presented with pairs of bad and good images, and learns the parameters of 6 photographic filters that improve the bad photos by making them resemble the high-quality references. Custom rendering layers apply the photographic filters and compute their derivatives during the forward training pass, so loss attribution can be performed during the backward pass. Our experiments confirm that loss functions based on feature-extraction networks achieve better quality than pixel-comparison metrics. To mimic professional editing applications, our filters are based on curve mapping and alpha blending, and they are rendered in a linear RGB colorspace for mathematical accuracy. At inference time, the custom rendering layers are removed, so the model's output is just the set of filter parameters that best improve the input image. We achieve high-resolution results by applying the predicted filters to the photo captured by the user, even though training and prediction operate on downscaled thumbnails. Our approach has been validated in a professional mobile application.
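The core idea described above (a parametric filter based on curve mapping and alpha blending, rendered in linear RGB, whose few parameters can be predicted on a thumbnail and then reapplied to the full-resolution photo) can be sketched as follows. This is a minimal illustration, not the paper's actual filter set: the power-curve mapping, the `gamma`/`alpha` parameter names, and the single-filter pipeline are assumptions for exposition.

```python
import numpy as np

def srgb_to_linear(x):
    # Standard piecewise sRGB decoding (values in [0, 1]).
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(x):
    # Inverse of the sRGB decoding; clip to keep the power term defined.
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, x * 12.92, 1.055 * x ** (1.0 / 2.4) - 0.055)

def apply_filter(image_srgb, gamma, alpha):
    """Hypothetical single filter in the spirit of the paper:
    a curve mapping (here, a power curve with exponent `gamma`)
    alpha-blended with the original, computed in linear RGB.
    Both `gamma` and `alpha` would be predicted by the network."""
    lin = srgb_to_linear(image_srgb)
    mapped = lin ** gamma                      # curve mapping
    blended = alpha * mapped + (1.0 - alpha) * lin  # alpha blending
    return linear_to_srgb(blended)

# Because the filter is defined by a handful of scalars, parameters
# predicted on a small thumbnail transfer directly to the full-size photo.
thumbnail = np.array([[0.2, 0.5, 0.8]])
full_res = np.tile(thumbnail, (100, 100))
gamma, alpha = 1.3, 0.6          # stand-ins for network predictions
enhanced = apply_filter(full_res, gamma, alpha)
```

With `gamma = 1.0` the curve is the identity, so any `alpha` leaves the image unchanged; this kind of neutral setting is what makes the filter family safe to initialize near a no-op. Note that the mapping is smooth in its parameters, which is what allows the custom rendering layers to supply derivatives for backpropagation.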

Cite

APA

Cuenca-Jiménez, P. M., Fernandez-Conde, J., & Cañas-Plaza, J. M. (2022). FilterNet: Self-Supervised Learning for High-Resolution Photo Enhancement. IEEE Access, 10, 2669–2685. https://doi.org/10.1109/ACCESS.2021.3139778
