Learning Super-Resolution of Environment Matting of Transparent Objects From a Single Image


Abstract

This paper addresses the problem of super-resolution of environment matting of transparent objects. In contrast to traditional environment matting methods, which often require a large number of input images or complex camera setups, recent approaches using convolutional neural networks are more practical: after training, they can generate environment mattes from a single image. However, they still lack super-resolution capabilities. This paper first proposes an encoder-decoder network with restoration units for super-resolution environment matting, called Enhanced Transparent Object Matting Network (ETOM-Net). We then introduce a refinement phase to further improve the details of the output. ETOM-Net effectively recovers features lost in the low-resolution (LR) input images and produces visually plausible high-resolution (HR) environment mattes and the corresponding reconstructed images, demonstrating the effectiveness of our method.
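The abstract describes a two-stage design: an encoder-decoder network with restoration units that predicts an HR result from an LR input, followed by a refinement phase that sharpens the output. The sketch below is a minimal, hypothetical PyTorch illustration of that general structure only; the module names, channel widths, layer counts, and x2 upscale factor are assumptions for illustration and are not the paper's actual ETOM-Net architecture.

```python
# Minimal sketch of an encoder-decoder + refinement pipeline for
# super-resolution matting-style prediction. All names, channel widths,
# and the x2 upscale factor are illustrative assumptions, not the
# paper's design.
import torch
import torch.nn as nn


class EncoderDecoderSR(nn.Module):
    """Toy encoder-decoder mapping an LR image to a coarse HR prediction."""

    def __init__(self, in_ch=3, out_ch=3, base=32, scale=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Stand-in for "restoration units": residual conv blocks at the bottleneck.
        self.restore = nn.Sequential(
            nn.Conv2d(base * 2, base * 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 2, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            # Final upsampling stage takes the output past the input resolution.
            nn.ConvTranspose2d(base, out_ch, 4, stride=scale, padding=1),
        )

    def forward(self, lr_image):
        feats = self.encoder(lr_image)
        feats = feats + self.restore(feats)   # residual "restoration"
        return self.decoder(feats)


class RefinementNet(nn.Module):
    """Small residual network that refines the coarse HR prediction."""

    def __init__(self, ch=3, base=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, ch, 3, padding=1),
        )

    def forward(self, coarse_hr):
        return coarse_hr + self.body(coarse_hr)  # predict a residual correction


if __name__ == "__main__":
    lr = torch.randn(1, 3, 64, 64)            # dummy LR input
    coarse = EncoderDecoderSR(scale=2)(lr)    # stage 1: coarse HR output (128x128)
    refined = RefinementNet()(coarse)         # stage 2: detail refinement
    print(coarse.shape, refined.shape)
```

The residual connections in both stages follow a common super-resolution pattern (predicting corrections rather than full images); whether the paper uses this particular formulation is not stated in the abstract.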

Citation (APA)

Hang, Z., & Yang, Y. H. (2022). Learning Super-Resolution of Environment Matting of Transparent Objects From a Single Image. IEEE Access, 10, 3548–3558. https://doi.org/10.1109/ACCESS.2022.3140466
