End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding

27 Citations · 47 Mendeley Readers

Abstract

Natural language spatial video grounding aims to detect the relevant objects in video frames, using descriptive sentences as queries. Despite great advances, most existing methods rely on dense video frame annotations, which require a tremendous amount of human effort. To achieve effective grounding under a limited annotation budget, we investigate one-shot video grounding and learn to ground natural language in all video frames with only one frame labeled, in an end-to-end manner. One major challenge of end-to-end one-shot video grounding is the existence of video frames that are irrelevant to either the language query or the labeled frame. Another challenge is the limited supervision, which may result in ineffective representation learning. To address these challenges, we design an end-to-end model via Information Tree for One-Shot video grounding (IT-OS). Its key module, the information tree, can eliminate the interference of irrelevant frames through branch search and branch cropping techniques. In addition, several self-supervised tasks based on the information tree are proposed to improve representation learning under insufficient labeling. Experiments on the benchmark dataset demonstrate the effectiveness of our model.
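The abstract does not include code, but the branch-search-and-crop idea can be illustrated with a short sketch. The following PyTorch snippet is a minimal, hypothetical rendering of that idea, assuming mean-pooled tree nodes and cosine-similarity relevance scores; the function names (build_tree, branch_search_and_crop), the binary-tree construction, and the 0.3 threshold are illustrative assumptions, not the authors' IT-OS implementation.

import torch
import torch.nn.functional as F


def build_tree(frame_feats):
    # Bottom-up binary tree: leaves are per-frame features; each parent is the
    # mean of its two children. Returns the levels, leaves first, root last.
    # (Mean pooling is an assumption; the paper may aggregate differently.)
    levels = [frame_feats]
    while levels[-1].size(0) > 1:
        level = levels[-1]
        if level.size(0) % 2:  # pad odd-sized levels by repeating the last node
            level = torch.cat([level, level[-1:]], dim=0)
        levels.append(level.view(-1, 2, level.size(-1)).mean(dim=1))
    return levels


def branch_search_and_crop(levels, query_feat, threshold=0.3):
    # Score every tree node against the query; when a node falls below the
    # threshold, crop its whole branch, i.e. drop every leaf (frame) it covers.
    keep = torch.ones(levels[0].size(0), dtype=torch.bool)
    for depth, nodes in enumerate(levels):
        scores = F.cosine_similarity(nodes, query_feat.unsqueeze(0), dim=-1)
        span = 2 ** depth  # number of leaves covered by one node at this depth
        for i, score in enumerate(scores):
            if score < threshold:
                keep[i * span:(i + 1) * span] = False
    return keep


if __name__ == "__main__":
    torch.manual_seed(0)
    frames = torch.randn(8, 256)    # stand-in for per-frame visual features
    query = frames[:2].mean(dim=0)  # toy query correlated with frames 0 and 1
    mask = branch_search_and_crop(build_tree(frames), query)
    print("frames kept for grounding:", mask.nonzero().flatten().tolist())

In this sketch, a branch is discarded as soon as any of its ancestor nodes scores below the threshold, which mirrors the cropping intuition in the abstract: frames unrelated to the query or the labeled frame never reach the grounding stage.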

Citation (APA)

Li, M., Wang, T., Zhang, H., Zhang, S., Zhao, Z., Miao, J., … Wu, F. (2022). End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 8707–8717). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.596

Readers' Seniority

PhD / Post grad / Masters / Doc: 10 (67%)
Researcher: 3 (20%)
Professor / Associate Prof.: 1 (7%)
Lecturer / Post doc: 1 (7%)

Readers' Discipline

Computer Science: 14 (74%)
Neuroscience: 2 (11%)
Linguistics: 2 (11%)
Engineering: 1 (5%)
