TiV-ODE: A Neural ODE-based Approach for Controllable Video Generation from Text-Image Pairs


Abstract

Videos capture the evolution of continuous dynamical systems over time in the form of discrete image sequences. Recently, video generation models have been widely used in robotics research. However, generating controllable videos from image-text pairs remains an important yet underexplored topic in both the robotics and computer vision communities. This paper introduces a framework named TiV-ODE that formulates the task as modeling a dynamical system in continuous time. Specifically, the framework leverages Neural Ordinary Differential Equations (Neural ODEs) to model the complex dynamics depicted in videos as a nonlinear ordinary differential equation. The resulting framework offers control over the generated videos' dynamics, content, and frame rate, a combination not provided by previous methods. Experiments demonstrate that the proposed method generates highly controllable and visually consistent videos and can model dynamical systems. Overall, this work is a step towards controllable video generation models that can handle complex and dynamic scenes.
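The frame-rate control mentioned above follows directly from the Neural ODE formulation: once a latent state evolves under a learned dynamics function, frames can be decoded at any set of timestamps. The sketch below illustrates this idea only; it is not the paper's architecture, and `dynamics` is a hypothetical stand-in for a learned network, here a hand-written damped oscillation in a 2-D latent space integrated with fixed-step Euler.

```python
def dynamics(z, t):
    # Hypothetical stand-in for a learned dynamics network f_theta(z, t):
    # a simple damped rotation in a 2-D latent space.
    return [-0.5 * z[0] + z[1], -z[0] - 0.5 * z[1]]

def odeint_euler(f, z0, ts, steps_per_interval=100):
    """Fixed-step Euler integration of dz/dt = f(z, t), evaluated at times ts."""
    trajectory = [list(z0)]
    z = list(z0)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = (t1 - t0) / steps_per_interval
        t = t0
        for _ in range(steps_per_interval):
            dz = f(z, t)
            z = [zi + h * dzi for zi, dzi in zip(z, dz)]
            t += h
        trajectory.append(list(z))
    return trajectory

# Frame rate is just the choice of evaluation times over the same interval:
# a denser time grid yields more frames from the same underlying dynamics.
ts_low = [0.0, 0.5, 1.0]                 # 3 frames
ts_high = [i * 0.1 for i in range(11)]   # 11 frames

traj_low = odeint_euler(dynamics, [1.0, 0.0], ts_low)
traj_high = odeint_euler(dynamics, [1.0, 0.0], ts_high)
```

Both trajectories end at (approximately) the same latent state at t = 1, since they integrate the same ODE; only the sampling density differs, which is what makes the frame rate a free parameter at inference time.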

Citation (APA)

Xu, Y., Li, N., Goel, A., Yao, Z., Guo, Z., Kasaei, H., … Li, Z. (2024). TiV-ODE: A Neural ODE-based Approach for Controllable Video Generation from Text-Image Pairs. In Proceedings - IEEE International Conference on Robotics and Automation (pp. 14645–14652). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA57147.2024.10610149
