Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering

Citations: 1 · Mendeley readers: 10

Abstract

Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have exploited the explicit semantic connections between the question and the visual information, especially at the event level. Such semantic connections are needed to facilitate complex reasoning across video frames. We therefore propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly incorporate the Semantic Role Labeling (SRL) structure of the question into the dynamic reasoning process, deciding whether to move to the next frame based on which part of the question's SRL structure (agent, verb, patient, etc.) is currently in focus. We conduct experiments on TrafficQA, a benchmark EVQA dataset. Results show that our proposed approach outperforms previous state-of-the-art models. Our code is publicly available at https://github.com/lyuchenyang/Semantic-aware-VideoQA.
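As a rough illustration of the idea described in the abstract, the sketch below shows how SRL-guided retrospective-prospective reasoning over frames might look in Python. Everything here is hypothetical: the function name, the dot-product scoring, and the thresholds are assumptions for illustration only, not the authors' method; see the linked repository for the actual implementation.

```python
# Hypothetical sketch of SRL-guided dynamic frame reasoning (an
# assumption-based illustration, not the paper's implementation).
import numpy as np

def srl_guided_reasoning(question_spans, frame_features, span_features,
                         max_steps=20):
    """Walk over video frames, moving forward (prospective) or backward
    (retrospective) depending on how well the current frame matches the
    SRL span of the question currently in focus.

    question_spans: list of SRL role labels, e.g. ["agent", "verb", "patient"]
    frame_features: (num_frames, d) array of frame embeddings
    span_features:  (num_spans, d) array of embeddings, one per SRL span
    """
    num_frames = frame_features.shape[0]
    frame_idx, span_idx = 0, 0
    visited = []
    for _ in range(max_steps):
        visited.append((frame_idx, question_spans[span_idx]))
        # Similarity between the focused SRL span and the current frame
        # (dot product used here purely for illustration).
        score = float(frame_features[frame_idx] @ span_features[span_idx])
        if score > 0.5:
            # Span grounded in this frame: shift focus to the next SRL span.
            span_idx += 1
            if span_idx == len(question_spans):
                break  # every role grounded; stop reasoning
        elif span_idx > 0 and score < -0.5:
            # Strong mismatch: step back a frame and re-focus the
            # previous span (retrospective move).
            frame_idx = max(frame_idx - 1, 0)
            span_idx -= 1
        else:
            # Otherwise look ahead to the next frame (prospective move).
            frame_idx = min(frame_idx + 1, num_frames - 1)
    return visited
```

The key design point the sketch tries to capture is that the traversal direction is conditioned on which SRL component of the question is in focus, rather than scanning frames uniformly.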

Cite (APA)

Lyu, C., Ji, T., Graham, Y., & Foster, J. (2023). Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 4, pp. 50–56). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-srw.7
