Advertisement videos (ads) play an integral role in Internet e-commerce: they amplify the reach of particular products to broad audiences and can serve as a medium to raise awareness about specific issues through concise narrative structures. Understanding the narrative structure of an advertisement involves reasoning about its broad content (the topic and the underlying message) as well as fine-grained details, such as transitions in perceived tone driven by the sequence of events and the interactions among characters. In this work, to facilitate the understanding of advertisements along the three dimensions of topic categorization, perceived tone transition, and social message detection, we introduce MM-AU, a multimodal multilingual benchmark comprising 8.4K videos (147 hours) curated from multiple web-based sources. We explore multiple zero-shot reasoning baselines by applying large language models to the ad transcripts. Further, we demonstrate that leveraging signals from multiple modalities (audio, video, and text) in multimodal transformer-based supervised models leads to improved performance compared to unimodal approaches.
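To make the supervised multimodal setup concrete, below is a minimal sketch of one plausible design: pre-extracted audio, video, and text features are projected into a shared space, tagged with learned modality embeddings, and fused by a transformer encoder that classifies from a CLS token. This is an illustration under stated assumptions, not the paper's exact architecture; the module name `MultimodalFusion`, the feature dimensions, and the class count are all placeholders.

```python
# Minimal sketch (assumed design, not the paper's exact model): late fusion of
# precomputed audio/video/text feature sequences with a transformer encoder.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):  # hypothetical module name
    def __init__(self, dims=None, d_model=256, num_classes=18):  # num_classes is illustrative
        super().__init__()
        dims = dims or {"audio": 128, "video": 512, "text": 768}  # placeholder dims
        # Project each modality's feature sequence into a shared embedding space.
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
        # Learned per-modality embedding so the encoder can tell the streams apart.
        self.mod_emb = nn.ParameterDict(
            {m: nn.Parameter(torch.zeros(1, 1, d_model)) for m in dims})
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, feats):  # feats: {modality: (batch, seq_len, feat_dim)}
        tokens = [self.proj[m](x) + self.mod_emb[m] for m, x in feats.items()]
        cls = self.cls.expand(tokens[0].size(0), -1, -1)
        x = self.encoder(torch.cat([cls] + tokens, dim=1))
        return self.head(x[:, 0])  # classify from the fused CLS token

# Example: a batch of 2 clips with short feature sequences per modality.
model = MultimodalFusion()
logits = model({"audio": torch.randn(2, 50, 128),
                "video": torch.randn(2, 30, 512),
                "text": torch.randn(2, 40, 768)})
print(logits.shape)  # torch.Size([2, 18])
```

Fusing at the feature level like this keeps the model small and lets each modality contribute through cross-modal attention, which is one common way such multimodal transformer classifiers are built.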
CITATION
Bose, D., Hebbar, R., Feng, T., Somandepalli, K., Xu, A., & Narayanan, S. (2023). MM-AU: Towards Multimodal Understanding of Advertisement Videos. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 86–95). Association for Computing Machinery. https://doi.org/10.1145/3581783.3612371