Video inpainting in spatial-temporal domain based on adaptive background and color variance


Abstract

Video inpainting repairs damaged regions in video. Video cameras are now commonly used to record visual memories of daily life, and a recorded scene sometimes contains unwanted objects, yet re-shooting the video is often impractical. To address this problem, we propose a video inpainting method that effectively repairs damaged regions by exploiting the relationships between frames in the temporal sequence and the color variability in the spatial domain. The proposed method consists of adaptive background construction, removal of the unwanted objects, and repair of the damaged regions in the temporal and spatial domains. Experimental results verify that the proposed method preserves image structure well and greatly reduces the computational time of inpainting.
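To illustrate the general idea described above, the following minimal Python sketch builds a background by taking the per-pixel temporal median over a stack of frames and then fills masked pixels (unwanted objects or damaged regions) from that background. This is an assumption-based illustration, not the authors' algorithm: the paper's adaptive background construction and color-variance criteria are not specified here, and the frame stack, mask, and function names below are hypothetical.

```python
# Hedged sketch: temporal-median background + mask-based fill.
# NOT the method of Huang & Lin (2016); names and logic are illustrative only.
import numpy as np

def build_background(frames: np.ndarray) -> np.ndarray:
    """Estimate a static background as the per-pixel temporal median.

    frames: array of shape (T, H, W, C), dtype uint8.
    """
    return np.median(frames, axis=0).astype(np.uint8)

def inpaint_frame(frame: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Replace masked pixels (mask == True) with the corresponding background pixels."""
    repaired = frame.copy()
    repaired[mask] = background[mask]
    return repaired

if __name__ == "__main__":
    # Synthetic example: 10 frames of a 64x64 RGB scene with a moving "object".
    frames = np.full((10, 64, 64, 3), 128, dtype=np.uint8)
    for t in range(10):
        frames[t, 20:30, 5 + 5 * t:15 + 5 * t] = 255  # object sweeps across the scene
    background = build_background(frames)

    # Remove the object from frame 3 using its (here, known) mask.
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 20:30] = True
    repaired = inpaint_frame(frames[3], mask, background)
    print(repaired[25, 25])  # masked pixel now carries the background color
```

In this toy setup the median suppresses the transient object because it appears at any given pixel in only a minority of frames; a spatial step (e.g., diffusion or patch-based filling guided by local color variance) would still be needed for regions the background never covers.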

Citation (APA)

Huang, H. Y., & Lin, C. H. (2016). Video inpainting in spatial-temporal domain based on adaptive background and color variance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9799, pp. 633–644). Springer Verlag. https://doi.org/10.1007/978-3-319-42007-3_55
