Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have exploited the explicit semantic connections between the question and the visual information, especially at the event level. Such semantic connections are needed to facilitate complex reasoning across video frames. We therefore propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly exploit the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process: the decision to move to the next frame depends on which part of the SRL structure (agent, verb, patient, etc.) is currently in focus. We conduct experiments on TrafficQA, a benchmark EVQA dataset. Results show that our approach outperforms previous state-of-the-art models. Our code is publicly available at https://github.com/lyuchenyang/Semantic-aware-VideoQA.
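The abstract only outlines the mechanism, so the following is a minimal, hypothetical PyTorch sketch of how SRL-guided retrospective-prospective frame navigation might look. The class name, dimensions, the attention-based role focusing, and the greedy back/stay/forward step decision are all assumptions for illustration, not the authors' actual implementation; role embeddings are assumed to come from an off-the-shelf SRL parser applied to the question.

```python
import torch
import torch.nn as nn

class SRLGuidedFrameNavigator(nn.Module):
    """Illustrative sketch (not the paper's model): attend over the SRL
    components of the question (agent, verb, patient, ...) at the current
    frame and decide whether to step backward (retrospective) or forward
    (prospective) through the video. All names/sizes are hypothetical."""

    def __init__(self, dim=512, n_heads=8):
        super().__init__()
        # Cross-attention: the current frame queries the question's SRL spans
        self.role_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # 3-way step decision: 0 = move back, 1 = stop here, 2 = move forward
        self.step_head = nn.Linear(dim, 3)

    def forward(self, role_embs, frame_embs, max_steps=10):
        # role_embs:  (1, n_roles, dim) embeddings of the question's SRL spans
        # frame_embs: (n_frames, dim)   per-frame visual features
        t, trace = 0, []
        ctx = frame_embs[0].clone()
        for _ in range(max_steps):
            frame = frame_embs[t].view(1, 1, -1)
            # Which SRL role is most relevant to the frame currently in focus?
            out, attn = self.role_attn(frame, role_embs, role_embs)
            ctx = out.squeeze()
            # Greedy argmax shown for simplicity; training would need a
            # differentiable relaxation (e.g. Gumbel-softmax) or RL.
            move = self.step_head(ctx).argmax().item()
            trace.append((t, attn.squeeze().tolist()))
            if move == 1:
                break
            t = max(0, min(len(frame_embs) - 1, t + (1 if move == 2 else -1)))
        return ctx, trace

# Usage with random features (shapes are assumptions):
nav = SRLGuidedFrameNavigator()
roles = torch.randn(1, 4, 512)   # e.g. agent, verb, patient, location spans
frames = torch.randn(32, 512)    # 32 sampled frame features
ctx, trace = nav(roles, frames)  # final context vector + visited-frame trace
```

The trace of (frame index, role-attention weights) pairs makes the navigation interpretable: it records which SRL component drove each retrospective or prospective step.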
Citation: Lyu, C., Ji, T., Graham, Y., & Foster, J. (2023). Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 4, pp. 50–56). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-srw.7