Cross Transformer Network for Scale-Arbitrary Image Super-Resolution


Abstract

Since implicit neural representation methods can be used for continuous image representation learning, pixel values can be inferred from a neural network over a continuous spatial domain. Recent approaches exploit this to perform super-resolution at arbitrary scales. However, their magnified images are often distorted, and their results are inferior to those of single-scale super-resolution methods. This work proposes CrossSR, a novel network built on a base Cross Transformer structure. Benefiting from the global interactions between contexts provided by the Cross Transformer's self-attention mechanism, CrossSR can efficiently exploit cross-scale features. A dynamic position-coding module and a dense MLP operation are employed for continuous image representation to further improve the results. Extensive experiments and ablation studies show that CrossSR achieves competitive performance compared to state-of-the-art methods, for both lightweight and classical image super-resolution.
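The core idea of implicit neural representation for arbitrary-scale super-resolution is that a network maps continuous spatial coordinates to pixel values, so any output resolution can be sampled from the same model. The sketch below illustrates this principle only; it is not the authors' CrossSR architecture, and the tiny random MLP merely stands in for a trained decoder:

```python
import numpy as np

# Illustrative sketch (NOT the authors' CrossSR): an implicit image
# representation maps continuous (x, y) coordinates to RGB values,
# so any target resolution can be sampled from the same network.

rng = np.random.default_rng(0)

# Tiny random MLP standing in for a trained decoder: R^2 -> R^3 (RGB).
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 3)), np.zeros(3)

def implicit_image(coords):
    """Query pixel values at continuous coordinates in [-1, 1]^2."""
    h = np.maximum(coords @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                      # one RGB value per coordinate

def sample_grid(height, width):
    """Dense coordinate grid covering [-1, 1]^2 at an arbitrary resolution."""
    ys = np.linspace(-1.0, 1.0, height)
    xs = np.linspace(-1.0, 1.0, width)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1).reshape(-1, 2)

# Arbitrary-scale sampling: query the same model at two resolutions.
lr = implicit_image(sample_grid(16, 16)).reshape(16, 16, 3)
hr = implicit_image(sample_grid(59, 59)).reshape(59, 59, 3)  # non-integer scale
```

Because the decoder is queried per coordinate rather than per fixed pixel grid, the scale factor need not be an integer, which is what "scale-arbitrary" refers to in the title.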

Citation (APA)

He, D., Wu, S., Liu, J., & Xiao, G. (2022). Cross Transformer Network for Scale-Arbitrary Image Super-Resolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13369 LNAI, pp. 633–644). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-10986-7_51
