Depth maps acquired by low-cost sensors have low spatial resolution, which restricts their usefulness in many image processing and computer vision tasks. To increase the spatial resolution of the depth map, most state-of-the-art deep-learning-based depth map super-resolution methods extract features from a high-resolution guidance image and concatenate them with features from the depth map. However, such simple concatenation can transfer unnecessary textures from the guidance image to the depth map, a problem known as texture copying artifacts. To address this problem, we propose a novel depth map super-resolution method using guided deformable convolution. Unlike standard deformable convolution, guided deformable convolution obtains the 2D kernel offsets for the depth features from the guidance features. Because the guidance features are not explicitly concatenated with the depth features but are used only to determine the kernel offsets, the proposed method significantly alleviates texture copying artifacts in the resulting depth map. Experimental results show that the proposed method outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
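The core mechanism can be illustrated with a minimal PyTorch sketch. This is an assumption about the layer's structure based on the abstract alone, not the authors' implementation: a small convolution over the guidance features predicts the 2D kernel offsets, and torchvision's deform_conv2d then samples the depth features at those offset locations. The names GuidedDeformConv and offset_conv are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class GuidedDeformConv(nn.Module):
    """Sketch of a guided deformable convolution layer (hypothetical).

    Kernel offsets are predicted from the guidance features only; the
    guidance features are never concatenated with the depth features,
    so guidance textures cannot leak directly into the depth branch.
    """

    def __init__(self, depth_channels, guide_channels, out_channels, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        pad = kernel_size // 2
        # Offset branch on guidance features: 2 values (x, y) per kernel tap.
        self.offset_conv = nn.Conv2d(
            guide_channels, 2 * kernel_size * kernel_size,
            kernel_size, padding=pad)
        # Weights of the deformable convolution applied to depth features.
        self.weight = nn.Parameter(
            torch.empty(out_channels, depth_channels, kernel_size, kernel_size))
        nn.init.kaiming_uniform_(self.weight, a=1)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        # Zero-init offsets so the layer starts as a regular convolution.
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)

    def forward(self, depth_feat, guide_feat):
        offset = self.offset_conv(guide_feat)  # (N, 2*K*K, H, W)
        return deform_conv2d(
            depth_feat, offset, self.weight, self.bias,
            padding=self.kernel_size // 2)


# Usage: depth and guidance feature maps must share spatial size.
depth_feat = torch.randn(1, 64, 64, 64)
guide_feat = torch.randn(1, 64, 64, 64)
layer = GuidedDeformConv(64, 64, 64)
out = layer(depth_feat, guide_feat)
print(out.shape)  # torch.Size([1, 64, 64, 64])
```

Note how this differs from standard deformable convolution, where the offsets would be predicted from the depth features themselves (or a concatenation of both streams): here the guidance branch influences only where the depth branch samples, not what values flow through it.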
Citation:
Kim, J. Y., Ji, S., Baek, S. J., Jung, S. W., & Ko, S. J. (2021). Depth Map Super-Resolution Using Guided Deformable Convolution. IEEE Access, 9, 66626–66635. https://doi.org/10.1109/ACCESS.2021.3076853