The success of existing video super-resolution (VSR) algorithms stems mainly from exploiting temporal information from neighboring frames. However, none of these methods consider the influence of temporal redundancy in patches containing stationary objects and backgrounds; they typically use all information in adjacent frames without discrimination. In this paper, we observe that temporal redundancy adversely affects information propagation, which limits the performance of most existing VSR methods and causes severe generalization problems. Motivated by this observation, we aim to improve existing VSR algorithms by handling temporally redundant patches in an optimized manner. We develop two simple yet effective plug-and-play methods that improve the performance and generalization ability of existing local and non-local propagation-based VSR algorithms on widely used public videos. To evaluate the robustness and performance of existing VSR algorithms more comprehensively, we also collect a new dataset containing a variety of public videos as a test set. Extensive evaluations show that the proposed methods significantly improve the performance and generalization ability of existing VSR methods on videos collected from wild scenarios, while maintaining their performance on commonly used datasets.
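The core idea of detecting temporally redundant patches can be illustrated with a minimal sketch: compare co-located patches in consecutive frames and flag those that barely change. Note that the patch size, threshold, and mean-absolute-difference score below are illustrative assumptions for exposition, not the criterion actually used in the paper.

```python
import numpy as np

def redundant_patch_mask(prev_frame, cur_frame, patch=8, thresh=2.0):
    """Flag co-located patches whose inter-frame change is below `thresh`.

    Returns a boolean grid over non-overlapping patches;
    True = temporally redundant (near-static) patch.
    `patch` and `thresh` are hypothetical hyperparameters.
    """
    h, w = cur_frame.shape[:2]
    gh, gw = h // patch, w // patch
    mask = np.zeros((gh, gw), dtype=bool)
    for i in range(gh):
        for j in range(gw):
            a = prev_frame[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            b = cur_frame[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            # mean absolute difference as a cheap redundancy score
            diff = np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
            mask[i, j] = diff < thresh
    return mask
```

A propagation-based VSR pipeline could consult such a mask to treat flagged patches differently (e.g., reuse previous features instead of propagating redundant information) while processing moving regions normally.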
Huang, Y., Dong, H., Pan, J., Zhu, C., Liang, B., Guo, Y., … Wang, F. (2023). Boosting Video Super Resolution with Patch-Based Temporal Redundancy Optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14260 LNCS, pp. 362–375). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-44195-0_30