We aim to automatically identify human action reasons in online videos. We focus on the widespread genre of lifestyle vlogs, in which people perform actions while verbally describing them. We introduce and make publicly available the WHYACT dataset, consisting of 1,077 visual actions manually annotated with their reasons. We describe a multimodal model that leverages visual and textual information to automatically infer the reasons corresponding to an action presented in the video.
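The multimodal inference described above could, in a minimal sketch, be framed as scoring candidate reasons against a fused visual-textual representation of the action. The function name, the simple concatenation-based late fusion, and the cosine-similarity scoring below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def score_reasons(visual_emb, text_emb, reason_embs):
    # Illustrative sketch only: fuse the two modalities by
    # concatenation (a simple late-fusion choice, assumed here),
    # then score each candidate reason by cosine similarity.
    fused = np.concatenate([visual_emb, text_emb])
    fused = fused / np.linalg.norm(fused)
    scores = []
    for reason in reason_embs:
        reason = reason / np.linalg.norm(reason)
        scores.append(float(fused @ reason))
    return scores

# Toy example: 2-d visual and textual embeddings, two candidate reasons.
visual = np.array([1.0, 0.0])
text = np.array([0.0, 1.0])
reasons = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([1.0, 0.0, 0.0, 1.0])]
scores = score_reasons(visual, text, reasons)
```

The second candidate aligns with both modalities of the fused vector, so it receives the higher score; a real system would learn these embeddings rather than fix them by hand.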
Citation:
Ignat, O., Castro, S., Miao, H., Li, W., & Mihalcea, R. (2021). WHYACT: Identifying Action Reasons in Lifestyle Vlogs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 4770–4785). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.392