Dynamic texture extraction and video denoising

Abstract

According to recent works introduced by Y. Meyer [1], decomposition models based on Total Variation (TV) appear to be a very good way to extract texture from image sequences. Indeed, videos exhibit characteristic variations along the temporal dimension which can be captured in the decomposition framework. However, very few works in the literature deal with spatio-temporal decompositions. We therefore devote this paper to a spatio-temporal extension of the spatial color decomposition model. We provide a relevant method to accurately capture the Dynamic Textures (DT) present in videos. Moreover, we obtain the spatio-temporally regularized part (the geometrical component) and distinctly separate the highly oscillatory variations (the noise). Furthermore, we present some elements of comparison between several models for denoising purposes. © 2009 Springer Berlin Heidelberg.
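
To make the decomposition idea concrete, the sketch below illustrates a generic TV-based structure/texture/noise split applied to the whole spatio-temporal volume. It is not the authors' color decomposition model: it is a minimal Python illustration, assuming a smoothed-TV energy minimized by explicit gradient descent and two successive splits whose fidelity weights (lam_geom, lam_tex) are illustrative choices, not values from the paper.

import numpy as np

def tv_smooth_3d(f, lam, n_iter=200, tau=0.1, eps=1e-3):
    """Crude explicit gradient descent on a smoothed spatio-temporal TV energy:
    minimize  |grad_{x,y,t} u|_1 + (lam/2) * ||u - f||^2
    over the video volume u. Expects f as a float array of shape (T, H, W)."""
    f = np.asarray(f, dtype=float)
    u = f.copy()
    for _ in range(n_iter):
        # forward differences along t, y, x (replicated at the borders)
        dt = np.diff(u, axis=0, append=u[-1:])
        dy = np.diff(u, axis=1, append=u[:, -1:])
        dx = np.diff(u, axis=2, append=u[:, :, -1:])
        mag = np.sqrt(dt**2 + dy**2 + dx**2 + eps)
        pt, py, px = dt / mag, dy / mag, dx / mag
        # divergence of the normalized gradient field (backward differences)
        div = (np.diff(pt, axis=0, prepend=pt[:1])
               + np.diff(py, axis=1, prepend=py[:, :1])
               + np.diff(px, axis=2, prepend=px[:, :, :1]))
        # descent step: curvature term pulls toward piecewise-smooth u,
        # fidelity term pulls back toward the observed video f
        u = u + tau * (div - lam * (u - f))
    return u

def decompose(video, lam_geom=0.05, lam_tex=0.5):
    """Three-part split f ~ u + v + w (geometry + dynamic texture + noise),
    obtained here by two successive TV smoothings with different fidelity weights."""
    u = tv_smooth_3d(video, lam_geom)     # strongly regularized: geometrical component
    residual = video - u
    v = tv_smooth_3d(residual, lam_tex)   # mildly regularized: oscillating (dynamic) texture
    w = residual - v                      # remaining highly oscillatory part: noise
    return u, v, w

Because the gradient and divergence act on all three axes (x, y, t), temporal oscillations of a dynamic texture end up in v rather than being flattened frame by frame, which is the point of the spatio-temporal extension described in the abstract.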

Cite

APA

Lugiez, M., Ménard, M., & El-Hamidi, A. (2009). Dynamic texture extraction and video denoising. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5807 LNCS, pp. 242–252). https://doi.org/10.1007/978-3-642-04697-1_23
