Convolutional Neural Networks (CNNs) are widely used for depth map super-resolution. However, networks with simple architectures and high efficiency generally lack accuracy, while highly accurate networks suffer from low efficiency and difficult training due to their excessive depth and complex architectures. We propose a depth map super-resolution fusion framework that fuses multiple Progressive Convolutional Neural Networks (PCNNs) with different architectures through a pixel-wise Partial Differential Equation (PDE). Each individual PCNN uses progressive learning and deep supervision to construct a mapping from the low-resolution space to the high-resolution space. The PDE model automatically classifies and processes the high-resolution depth maps with differing features output by the multiple PCNNs. The fusion term of the PDE preserves or integrates the complementary features of these depth maps, and the divergence term removes noise to improve the spatial accuracy and visual quality of the final output depth map. This approach yields simply structured neural networks with high accuracy, high efficiency, and relatively easy training for depth map super-resolution.
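To make the fusion idea concrete, below is a minimal sketch of a pixel-wise PDE-style fusion of several super-resolved depth maps, not the authors' implementation: the explicit Euler scheme, the fixed scalar weights, the diffusion coefficient, and the function name `fuse_depth_maps` are all assumptions for illustration, whereas the paper's PDE adapts its fusion behavior per pixel and per feature.

```python
import numpy as np

def fuse_depth_maps(sr_maps, weights=None, dt=0.1, diffusion=0.2, steps=50):
    """Fuse several super-resolved depth maps with an explicit PDE scheme.

    sr_maps: list of HxW arrays (outputs of the individual PCNNs).
    The fusion term pulls the estimate toward each input map (weighted);
    the divergence-of-gradient (Laplacian) term diffuses the estimate to
    suppress noise. This is a simplified stand-in for the paper's PDE.
    """
    maps = [np.asarray(m, dtype=np.float64) for m in sr_maps]
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    # Start from the weighted average of the network outputs.
    u = sum(w * m for w, m in zip(weights, maps))

    for _ in range(steps):
        # Divergence of the gradient (5-point Laplacian), edge-replicated borders.
        padded = np.pad(u, 1, mode="edge")
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
               + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * u)
        # Fusion term: weighted pull toward each PCNN output.
        fusion = sum(w * (m - u) for w, m in zip(weights, maps))
        u = u + dt * (diffusion * lap + fusion)
    return u

# Usage: fuse three hypothetical PCNN outputs for a 64x64 depth patch.
outputs = [np.random.rand(64, 64) for _ in range(3)]
fused = fuse_depth_maps(outputs)
```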
CITATION STYLE
Li, S., Zhang, B., Zhu, W., & Yang, X. (2020). FMPN: Fusing multiple progressive CNNs for depth map super-resolution. IEEE Access, 8, 170754–170768. https://doi.org/10.1109/ACCESS.2020.3024650