MixPred: Video prediction beyond optical flow

3 citations of this article
5 readers (Mendeley users who have this article in their library)

This article is free to access.

Abstract

Video prediction is a meaningful task with a wide range of application scenarios, and a challenging one, since it requires learning an internal representation of a given video that captures both appearance and motion dynamics. Existing methods treat this problem as spatiotemporal sequence forecasting and try to solve it in a one-shot fashion, which yields blurry or inaccurate predictions. A more intuitive approach is to split the problem into two parts: modeling the dynamic pattern of the given video and learning the appearance representation of its frames. In this paper, we develop a novel network structure, named MixPred, based on this idea. We divide the prediction problem into the two parts mentioned above and build two subnets to solve them separately. Instead of fusing the subnets' results only at the final layer, we propose a parallel interaction style that merges dynamic and content information throughout the whole network in a more natural way. In addition, we propose three different connection methods and explore which connection structure is most effective. We train the model on UCF-101 and KITTI, and test it on UCF-101, KITTI, and Caltech. The results demonstrate that our method achieves state-of-the-art performance both quantitatively and qualitatively.
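The central architectural idea in the abstract, two parallel subnets (one for motion dynamics, one for frame appearance) whose features interact at every stage rather than being fused once at the output, can be illustrated with a minimal sketch. The sketch below is written in PyTorch under assumed design choices: convolutional stages, 1x1-convolution cross connections with additive fusion, and a frame-difference input for the motion branch. None of these choices, nor the names InteractingStage and TwoStreamPredictor, come from the paper, which compares three connection methods of its own.

# Illustrative sketch only; layer sizes, fusion operator, and input split
# are assumptions, not the paper's exact MixPred configuration.
import torch
import torch.nn as nn

class InteractingStage(nn.Module):
    """One stage of the two parallel subnets with lateral cross connections."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.motion = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # dynamics subnet layer
        self.content = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # appearance subnet layer
        # Cross connections: project each branch's features into the other.
        self.c2m = nn.Conv2d(out_ch, out_ch, 1)
        self.m2c = nn.Conv2d(out_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, m, c):
        m = self.act(self.motion(m))
        c = self.act(self.content(c))
        # Parallel interaction at every stage (additive fusion here, one of
        # several plausible connection styles).
        return m + self.c2m(c), c + self.m2c(m)

class TwoStreamPredictor(nn.Module):
    def __init__(self, in_ch=3, width=32, depth=3):
        super().__init__()
        chs = [in_ch] + [width] * depth
        self.stages = nn.ModuleList(
            InteractingStage(chs[i], chs[i + 1]) for i in range(depth)
        )
        self.head = nn.Conv2d(width, in_ch, 3, padding=1)  # next-frame output

    def forward(self, frame_pair):
        # Assumed input split: the motion branch sees a frame difference,
        # the content branch sees the latest frame.
        prev, cur = frame_pair
        m, c = cur - prev, cur
        for stage in self.stages:
            m, c = stage(m, c)
        return self.head(m + c)  # final merge of both streams

# Predict a next frame from two 64x64 RGB frames.
pred = TwoStreamPredictor()(
    (torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
)

The point of the sketch is the contrast with one-shot or late-fusion designs: because the two streams exchange features at every stage, content features can condition the motion estimate (and vice versa) before the final merge.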

Cite

CITATION STYLE

APA

Yan, J., Qin, G., Zhao, R., Liang, Y., & Xu, Q. (2019). MixPred: Video prediction beyond optical flow. IEEE Access, 7, 185654–185665. https://doi.org/10.1109/ACCESS.2019.2961383
