Video recovery via learning variation and consistency of images


Abstract

Matrix completion algorithms have been widely used to recover images with missing entries, and they have proven very effective. Recent works applied tensor completion models to video recovery under the assumption that all video frames are homogeneous and correlated. However, real videos are composed of different episodes or scenes, i.e., they are heterogeneous. A video recovery model is therefore needed that exploits both the spatiotemporal consistency and the variation of a video. To address this problem, we propose a new video recovery method, Sectional Trace Norm with Variation and Consistency Constraints (STN-VCC). In our model, capped ℓ1-norm regularization is used to learn the spatiotemporal consistency and variation between consecutive frames in video clips. Meanwhile, we introduce a new low-rank model that captures the low-rank structure of video frames with a better approximation of rank minimization than the traditional trace norm. We propose an efficient optimization algorithm and provide a proof of convergence. We evaluate the proposed method on several video recovery tasks, and experimental results show that our new method consistently outperforms related approaches.
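To make the two ingredients named in the abstract concrete, here is a minimal NumPy sketch: the capped ℓ1-norm penalty (which bounds the cost of large inter-frame differences, so scene changes are not over-penalized) and singular value thresholding, the proximal operator of the trace (nuclear) norm used as a convex surrogate for rank minimization. This is an illustrative sketch only, not the paper's STN-VCC implementation; the function names and the plain-SVT step are generic assumptions.

```python
import numpy as np

def capped_l1(x, theta):
    # Capped ℓ1-norm: sum_i min(|x_i|, theta). Unlike the plain ℓ1-norm,
    # differences larger than theta all incur the same cost, so a scene
    # change between consecutive frames is tolerated rather than smoothed
    # away, while small within-scene variation is still penalized.
    return np.minimum(np.abs(x), theta).sum()

def svt(M, tau):
    # Singular value thresholding: shrink each singular value by tau and
    # clip at zero. This is the proximal operator of the trace (nuclear)
    # norm, the standard convex relaxation of rank minimization.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

In a completion setting, SVT would be applied iteratively while the observed entries are kept fixed; the capped penalty would couple consecutive frames of the unfolded video tensor.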

Citation (APA)
Huo, Z., Gao, S., Cai, W., & Huang, H. (2017). Video recovery via learning variation and consistency of images. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 4082–4088). AAAI press. https://doi.org/10.1609/aaai.v31i1.11241
