Most semi-supervised video object segmentation methods rely on a pixel-accurate mask of the target object provided for the first video frame. However, obtaining a detailed mask is expensive and time-consuming. In this work we explore a more practical and natural way of identifying a target object by employing language referring expressions. Leveraging recent advances in language grounding models designed for images, we propose an approach to extend them to video data, ensuring temporally coherent predictions. To evaluate our approach we augment the popular video object segmentation benchmarks, DAVIS 16 and DAVIS 17, with language descriptions of target objects. We show that our approach performs on par with methods that have access to the object mask on DAVIS 16 and is competitive with methods using scribbles on the more challenging DAVIS 17.
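One way to picture the "temporally coherent predictions" mentioned above is as a greedy selection over per-frame grounding candidates: in each frame, the chosen box trades off its language-grounding score against overlap with the previous frame's selection. This is only an illustrative sketch under assumed data structures (lists of `(box, score)` candidates), not the paper's actual method; the function names and the `alpha` weighting are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_coherent_track(frames, alpha=0.5):
    """Greedily pick one box per frame, combining the grounding score
    with (alpha-weighted) overlap against the previous selection.

    frames: list over video frames; each entry is a list of
            (box, grounding_score) candidates for the referred object.
    """
    track = []
    for candidates in frames:
        if not track:
            # First frame: trust the language grounding score alone.
            best = max(candidates, key=lambda c: c[1])
        else:
            # Later frames: penalize boxes that jump away from the track.
            prev = track[-1]
            best = max(candidates,
                       key=lambda c: c[1] + alpha * iou(c[0], prev))
        track.append(best[0])
    return track
```

For example, a candidate that scores slightly lower on the language model but overlaps heavily with the previous frame's box can still win, which suppresses identity switches between similar-looking objects.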
Khoreva, A., Rohrbach, A., & Schiele, B. (2019). Video object segmentation with referring expressions. In Lecture Notes in Computer Science (Vol. 11132, pp. 7–12). Springer. https://doi.org/10.1007/978-3-030-11018-5_2