Spatial-temporal motion compensation based video super resolution

Abstract

Due to the arbitrary motion patterns in practical video, annoying artifacts caused by registration errors often appear in the super-resolution result. This paper proposes a spatial-temporal motion compensation based super-resolution fusion method (STMC) for video, applied after explicit motion estimation between a few neighboring frames. We first register the neighboring low-resolution frames to their proper positions in the high-resolution frame, and then use the registered low-resolution information as non-local redundancy to compensate the surrounding positions that have few or no registered low-resolution pixels. Experimental results indicate that the proposed method effectively reduces the artifacts caused by motion estimation error, with clear improvement in both PSNR and visual quality. © 2011 Springer-Verlag Berlin Heidelberg.
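As a rough illustration of the fusion idea described in the abstract, the sketch below (not the authors' code) registers neighboring low-resolution frames onto a high-resolution grid and then fills positions with few or no registered samples from nearby registered pixels. It assumes a single global translational motion vector per frame, a scale factor of 2, and a simple 3x3 neighborhood fill as a crude stand-in for the paper's non-local redundancy compensation; the function name `fuse_frames` and all parameters are illustrative assumptions.

```python
# Minimal sketch of spatial-temporal motion-compensated fusion for video
# super resolution. Assumes motion between frames has already been estimated
# as a global (dy, dx) translation per frame; all names and parameters here
# are hypothetical, not taken from the paper.
import numpy as np

def fuse_frames(lr_frames, motions, scale=2):
    """Scatter registered LR pixels onto the HR grid, then compensate HR
    positions with no registered samples from nearby registered pixels."""
    h, w = lr_frames[0].shape
    H, W = h * scale, w * scale
    acc = np.zeros((H, W))   # accumulated registered intensities
    cnt = np.zeros((H, W))   # number of LR samples registered per HR pixel

    # Step 1: register each neighboring LR frame to the HR grid.
    for frame, (dy, dx) in zip(lr_frames, motions):
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        hy = np.round((ys + dy) * scale).astype(int)  # LR -> HR coordinates
        hx = np.round((xs + dx) * scale).astype(int)
        valid = (hy >= 0) & (hy < H) & (hx >= 0) & (hx < W)
        np.add.at(acc, (hy[valid], hx[valid]), frame[valid])
        np.add.at(cnt, (hy[valid], hx[valid]), 1)

    hr = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

    # Step 2: fill HR positions with no registered samples using registered
    # pixels in a small neighborhood (a simple proxy for the non-local
    # redundancy compensation described in the abstract).
    for y, x in np.argwhere(cnt == 0):
        y0, y1 = max(0, y - 1), min(H, y + 2)
        x0, x1 = max(0, x - 1), min(W, x + 2)
        patch_cnt = cnt[y0:y1, x0:x1].sum()
        if patch_cnt > 0:
            hr[y, x] = acc[y0:y1, x0:x1].sum() / patch_cnt
    return hr

# Example: fuse two shifted 8x8 LR frames onto a 16x16 HR grid.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((8, 8))
    lr_frames = [base, np.roll(base, 1, axis=1)]  # second frame shifted by 1 px
    motions = [(0.0, 0.0), (0.0, -1.0)]           # motion back to the reference
    print(fuse_frames(lr_frames, motions).shape)  # (16, 16)
```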

Citation (APA)

An, Y., Lu, Y., & Yan, Z. (2011). Spatial-temporal motion compensation based video super resolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6493 LNCS, pp. 282–292). https://doi.org/10.1007/978-3-642-19309-5_22
