Tunable U-Net: Controlling Image-to-Image Outputs Using a Tunable Scalar Value

11 citations · 8 Mendeley readers. This article is free to access.

Abstract

Image-to-image conversion tasks are more accurate and sophisticated than ever thanks to advances in deep learning. However, since a typical deep learning model is trained to perform only one task, a separate trained model is required for each task, even when the tasks are related to each other. For example, the popular image-to-image convolutional neural network, U-Net, is normally trained for a single task. Building on U-Net, this study proposes a model that produces variable outputs using only one trained model. The proposed method generates continuously changing outputs controlled by an external parameter. We confirm the robustness of the proposed model by evaluating it on binarization and background blurring. These evaluations show that the model generates well-predicted outputs for untrained tuning parameter values as well as for trained ones. Furthermore, the proposed model can extrapolate to parameter values outside the training range.
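The abstract does not specify how the tunable scalar enters the network, but a common and minimal way to condition a U-Net-style model on an external parameter is to broadcast the scalar into an extra constant input channel. The sketch below illustrates that idea only; the function name `add_tuning_channel` and the conditioning scheme are illustrative assumptions, not the paper's confirmed mechanism.

```python
import numpy as np

def add_tuning_channel(image: np.ndarray, t: float) -> np.ndarray:
    """Append a constant channel filled with the scalar t to an HxWxC image.

    A downstream U-Net would then receive (C + 1) input channels, letting a
    single trained model vary its output continuously as t changes.
    """
    h, w, _ = image.shape
    t_channel = np.full((h, w, 1), t, dtype=image.dtype)
    return np.concatenate([image, t_channel], axis=-1)

# Example: condition a 64x64 RGB image on tuning value t = 0.5.
image = np.random.rand(64, 64, 3).astype(np.float32)
conditioned = add_tuning_channel(image, t=0.5)
print(conditioned.shape)  # (64, 64, 4)
```

Because `t` is an ordinary input at inference time, it can be set to values never seen during training, which is how a scheme like this would support the interpolation and extrapolation behavior the abstract reports.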

Citation (APA)

Kang, S., Uchida, S., & Iwana, B. K. (2021). Tunable U-Net: Controlling Image-to-Image Outputs Using a Tunable Scalar Value. IEEE Access, 9, 103279–103290. https://doi.org/10.1109/ACCESS.2021.3096530
