YTCommentQA: Video Question Answerability in Instructional Videos


Abstract

Instructional videos provide detailed how-to guides for various tasks, with viewers often posing questions regarding the content. Addressing these questions is vital for comprehending the content, yet receiving immediate answers is difficult. While numerous computational models have been developed for Video Question Answering (Video QA) tasks, they are primarily trained on questions generated based on video content, aiming to produce answers from within the content. However, in real-world situations, users may pose questions that go beyond the video’s informational boundaries, highlighting the necessity to determine if a video can provide the answer. Discerning whether a question can be answered by video content is challenging due to the multi-modal nature of videos, where visual and verbal information are intertwined. To bridge this gap, we present the YTCommentQA dataset, which contains naturally-generated questions from YouTube, categorized by their answerability and required modality to answer – visual, script, or both. Experiments with answerability classification tasks demonstrate the complexity of YTCommentQA and emphasize the need to comprehend the combined role of visual and script information in video reasoning. The dataset is available at https://github.com/lgresearch/YTCommentQA.
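The abstract describes each question as labeled by answerability and, when answerable, by the modality required to answer it (visual, script, or both). A minimal sketch of what such a record schema and a per-modality tally might look like is below; the class and field names (`CommentQA`, `Modality`, `required_modalities`) are illustrative assumptions, not the dataset's actual format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Modality(Enum):
    """Which part of the video is needed to answer (assumed label set)."""
    VISUAL = "visual"
    SCRIPT = "script"
    BOTH = "both"


@dataclass
class CommentQA:
    """Hypothetical record: a viewer question with answerability labels."""
    video_id: str
    question: str
    answerable: bool
    modality: Optional[Modality]  # None when the video cannot answer it


def required_modalities(records):
    """Count answerable questions by the modality they require."""
    counts = {m: 0 for m in Modality}
    for r in records:
        if r.answerable and r.modality is not None:
            counts[r.modality] += 1
    return counts
```

For example, a question like "What brand of drill is that?" would need the visual track, while "Why preheat the pan first?" could be answered from the script alone; unanswerable questions carry no modality label.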

Citation (APA)

Yang, S., Park, S., Jang, Y., & Lee, M. (2024). YTCommentQA: Video Question Answerability in Instructional Videos. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 19359–19367). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i17.29906
