Augmented Partial Mutual Learning with Frame Masking for Video Captioning

Abstract

Recent video captioning work has improved greatly with the invention of various elaborate model architectures. If multiple captioning models are combined into a unified framework, not merely by simple model ensembling, so that each model can benefit from the others, the final captioning performance might be boosted further. Joint training of multiple models has not been explored in previous works. In this paper, we propose a novel Augmented Partial Mutual Learning (APML) training method in which multiple decoders are trained jointly with mimicry losses between different decoders and different input variations. Another problem in training captioning models is the "one-to-many" mapping problem, in which one identical video input is mapped to multiple caption annotations. To address this problem, we propose an annotation-wise frame masking approach that converts the "one-to-many" mapping into a "one-to-one" mapping. Experiments on the MSR-VTT and MSVD datasets demonstrate that our proposed algorithm achieves state-of-the-art performance.
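
To make the mimicry-loss idea concrete, below is a minimal PyTorch sketch (not the authors' code) of mutual learning between two decoders, where each decoder is trained on the ground-truth caption while also mimicking the other decoder's output distribution via a KL term. The function name, the temperature T, and the weight alpha are illustrative assumptions; the full APML objective, with its partial mimicry over multiple decoders and augmented input variations, is described in the paper.

    import torch.nn.functional as F

    def mutual_learning_loss(logits_a, logits_b, targets, T=1.0, alpha=0.5):
        # logits_a, logits_b: (batch * seq_len, vocab) outputs of two decoders
        # targets: (batch * seq_len,) ground-truth caption token ids
        ce_a = F.cross_entropy(logits_a, targets)
        ce_b = F.cross_entropy(logits_b, targets)

        # Each decoder mimics the other's (detached) softened distribution.
        kl_a = F.kl_div(F.log_softmax(logits_a / T, dim=-1),
                        F.softmax(logits_b.detach() / T, dim=-1),
                        reduction="batchmean") * T * T
        kl_b = F.kl_div(F.log_softmax(logits_b / T, dim=-1),
                        F.softmax(logits_a.detach() / T, dim=-1),
                        reduction="batchmean") * T * T

        # Supervised loss plus weighted mimicry loss for both decoders.
        return (ce_a + alpha * kl_a) + (ce_b + alpha * kl_b)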
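
The frame-masking idea can be sketched in a similar hedged way. The abstract does not specify the masking policy, so the scheme below is hypothetical: each caption annotation of a video receives its own deterministic frame mask, seeded by the annotation index, so that every (masked video, caption) pair forms a distinct one-to-one training example.

    import torch

    def annotation_frame_mask(frame_feats, annotation_idx, mask_ratio=0.2):
        # frame_feats: (num_frames, feat_dim) features of one video
        # annotation_idx: index of this caption among the video's annotations
        num_frames = frame_feats.size(0)
        num_masked = int(num_frames * mask_ratio)
        # Seed with the annotation index: the mask is deterministic for a
        # given annotation but differs across annotations of the same video.
        gen = torch.Generator().manual_seed(annotation_idx)
        masked = torch.randperm(num_frames, generator=gen)[:num_masked]
        out = frame_feats.clone()
        out[masked] = 0.0  # zero out the selected frames
        return out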

Cite

APA

Lin, K., Gan, Z., & Wang, L. (2021). Augmented Partial Mutual Learning with Frame Masking for Video Captioning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 3A, pp. 2047–2055). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i3.16301
