3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model

Abstract

In this paper, we build a multi-style generative model for stylish image captioning that uses multi-modality image features: ResNeXt visual features and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes these multi-modality features and decodes them into captions. We demonstrate the effectiveness of our model at generating human-like captions by examining its performance on two datasets, the PERSONALITYCAPTIONS dataset and the FlickrStyle10K dataset. We compare against a variety of state-of-the-art baselines on automatic NLP metrics such as BLEU, ROUGE-L, CIDEr, and SPICE. A qualitative study has also been conducted to verify that our 3M model can generate different stylized captions.
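The abstract describes encoding two feature modalities (ResNeXt visual features and DenseCap-generated text features) and decoding them into a caption. The following is a rough numpy sketch of that fusion idea only, not the paper's actual architecture: all dimensions, weight initializations, and the single-step attention decoder are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not taken from the paper)
d_img, d_txt, d_hid = 2048, 300, 512   # ResNeXt dim, text-embedding dim, hidden dim
n_regions, n_phrases = 36, 5           # image regions, DenseCap phrases

# Multi-modality inputs: image-region features and phrase embeddings
img_feats = rng.standard_normal((n_regions, d_img))
txt_feats = rng.standard_normal((n_phrases, d_txt))

# Project each modality into a shared hidden space (one branch per modality)
W_img = rng.standard_normal((d_img, d_hid)) * 0.01
W_txt = rng.standard_normal((d_txt, d_hid)) * 0.01
img_h = img_feats @ W_img   # (n_regions, d_hid)
txt_h = txt_feats @ W_txt   # (n_phrases, d_hid)

def attend(query, keys):
    """Soft attention: weight each key by its dot-product score with the query."""
    scores = keys @ query                   # one score per key
    weights = np.exp(scores - scores.max()) # stable softmax
    weights /= weights.sum()
    return weights @ keys                   # attended context, (d_hid,)

# One decoding step: the decoder state attends over each modality separately,
# and the two attended contexts are fused before predicting the next word.
decoder_state = rng.standard_normal(d_hid)
ctx_img = attend(decoder_state, img_h)
ctx_txt = attend(decoder_state, txt_h)
fused = np.concatenate([ctx_img, ctx_txt])  # (2 * d_hid,) multi-modality context
print(fused.shape)
```

In an UPDOWN-style decoder this fused context would feed a language LSTM at every step; here it is reduced to a single attention step to keep the sketch self-contained.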

Citation (APA)

Li, C., & Harrison, B. (2021). 3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model. In Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS (Vol. 34). Florida Online Journals, University of Florida. https://doi.org/10.32473/flairs.v34i1.128380
