Adaptive motion estimation and video vector quantization based on spatiotemporal non-linearities of human perception


Abstract

The two main tasks of a video coding system are motion estimation and vector quantization of the signal. In this work, a new splitting criterion is presented to control the adaptive decomposition used for non-uniform optical flow estimation. In addition, a novel bit allocation procedure is proposed for quantizing the DCT transform of the video signal. These new approaches are founded on a perception model that reproduces the relative importance the human visual system assigns to each location in the spatial frequency, temporal frequency, and amplitude domains of the DCT transform. The experiments show that the proposed procedures outperform their equivalents (fixed-block-size motion estimation and fixed-step-size quantization of the spatial DCT) used by MPEG-2.
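The idea of perceptually driven bit allocation can be illustrated with a minimal sketch: quantize the DCT coefficients of a block with a step size that grows where a sensitivity model says the eye cares less. The weighting function below is only an illustrative radial low-pass stand-in; the paper's actual model also depends on temporal frequency and coefficient amplitude, and all function names here are hypothetical.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, via the transform matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def perceptual_weights(n, f0=4.0):
    """Toy CSF-like sensitivity: decays with radial spatial frequency.
    (Stand-in only; the paper's model is spatiotemporal and amplitude-dependent.)"""
    fx, fy = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(-np.hypot(fx, fy) / f0)

def quantize(coeffs, weights, base_step=16.0):
    """Coarser quantization steps where sensitivity (weight) is lower."""
    steps = base_step / np.maximum(weights, 1e-3)
    return np.round(coeffs / steps) * steps

# Example: quantize a simple 8x8 ramp block with perceptual weighting.
block = np.outer(np.linspace(-64, 64, 8), np.ones(8))
rec = quantize(dct2(block), perceptual_weights(8))
```

High-frequency coefficients end up with much larger effective step sizes, so most of the bit budget is spent where the visual system is most sensitive.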

Citation (APA)

Malo, J., Ferri, F., Albert, J., & Artigas, J. M. (1997). Adaptive motion estimation and video vector quantization based on spatiotemporal non-linearities of human perception. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1310, pp. 454–461). Springer Verlag. https://doi.org/10.1007/3-540-63507-6_232
