Computationally efficient MCTF for MC-EZBC scalable video coding framework


Abstract

The discrete wavelet transform (DWT) applied temporally under motion compensation, i.e. motion-compensated temporal filtering (MCTF), has recently become a very powerful tool in scalable video compression, especially when implemented through lifting. The major speed bottleneck of the encoder is the computational complexity of the bidirectional motion estimation in MCTF. This paper proposes a novel predictive technique to reduce the computational complexity of MCTF. In the proposed technique, temporal filtering is first performed without motion compensation. The resulting high-frequency frames are used to predict the blocks under motion, and motion estimation is then carried out only for those predicted blocks. This significantly reduces the number of blocks that undergo motion estimation, and hence the computational complexity of MCTF is reduced by 44% to 92% over a variety of standard test sequences without compromising the quality of the decoded video. The proposed algorithm is implemented in MC-EZBC, a 3D-subband scalable video coding system. © Springer-Verlag Berlin Heidelberg 2007.
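The sketch below is not the authors' code; it is a minimal illustration of the predictive idea the abstract describes, under stated assumptions: a Haar lifting predict step as the unmotioned temporal filter, an 8x8 block grid, full-search SAD block matching as the motion estimator, and a hypothetical energy threshold `ENERGY_THRESH` for deciding which blocks are "under motion". The actual block size, filter, and decision rule in the paper may differ.

```python
# Minimal sketch of predictive MCTF block selection (assumptions noted above).
import numpy as np

BLOCK = 8               # block size (assumption; the paper's size may differ)
ENERGY_THRESH = 100.0   # hypothetical mean-squared-energy threshold

def temporal_highpass(even, odd):
    """Haar lifting predict step WITHOUT motion compensation: h = odd - even."""
    return odd.astype(np.float64) - even.astype(np.float64)

def flag_moving_blocks(h_frame):
    """Flag blocks whose high-frequency energy suggests motion; only these
    blocks are passed on to the expensive motion estimator."""
    rows, cols = h_frame.shape
    flags = []
    for y in range(0, rows - BLOCK + 1, BLOCK):
        for x in range(0, cols - BLOCK + 1, BLOCK):
            block = h_frame[y:y + BLOCK, x:x + BLOCK]
            if np.mean(block ** 2) > ENERGY_THRESH:
                flags.append((y, x))
    return flags

def full_search_me(cur, ref, y, x, search=8):
    """Plain full-search block matching (SAD cost) for one flagged block."""
    best, best_mv = np.inf, (0, 0)
    rows, cols = ref.shape
    cur_blk = cur[y:y + BLOCK, x:x + BLOCK].astype(np.float64)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry and ry + BLOCK <= rows and 0 <= rx and rx + BLOCK <= cols:
                cand = ref[ry:ry + BLOCK, rx:rx + BLOCK].astype(np.float64)
                sad = np.abs(cur_blk - cand).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv

# Toy frame pair: a bright square that shifts 4 pixels between frames.
even = np.zeros((64, 64))
even[8:24, 8:24] = 200.0
odd = np.roll(even, shift=4, axis=1)

h = temporal_highpass(even, odd)          # filter first, no motion compensation
motion_blocks = flag_moving_blocks(h)     # predict which blocks moved
vectors = {blk: full_search_me(odd, even, *blk) for blk in motion_blocks}
print(f"{len(motion_blocks)} of {(64 // BLOCK) ** 2} blocks needed ME")
```

In this toy run only the blocks around the moving square are flagged, so full-search motion estimation runs on a small fraction of the frame; this is the mechanism behind the 44% to 92% complexity reduction the abstract reports.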

Citation (APA)

Karunakar, A. K., & Pai, M. M. M. (2007). Computationally efficient MCTF for MC-EZBC scalable video coding framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4815 LNCS, pp. 666–673). Springer Verlag. https://doi.org/10.1007/978-3-540-77046-6_82
