Abstract
One of the ways blind people understand their surroundings is by taking pictures and relying on descriptions generated by image captioning systems. Current work on captioning images for the visually impaired does not use the textual data present in the image when generating captions. This gap is critical, as many visual scenes contain text. Moreover, up to 21% of the questions asked by blind people about the images they take pertain to the text present in them (Bigham et al., 2010). In this work, we propose altering AoANet, a state-of-the-art image captioning model, to leverage the text detected in the image as an input feature. In addition, we use a pointer-generator mechanism to copy the detected text into the caption when tokens need to be reproduced exactly. Our model outperforms AoANet on the benchmark dataset VizWiz, giving a 35% and 16.2% improvement on CIDEr and SPICE scores, respectively.
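To make the copy mechanism concrete, the sketch below shows one minimal way a pointer-generator head could mix the decoder's vocabulary distribution with a copy distribution over detected (OCR) tokens. This is an illustrative PyTorch sketch, not the authors' implementation; all class, argument, and tensor names (e.g. `ocr_attention`, `vocab_size_ext`) are assumptions for exposition.

```python
import torch
import torch.nn as nn

class PointerGeneratorHead(nn.Module):
    """Illustrative pointer-generator head: mixes a vocabulary distribution
    with a copy distribution over OCR tokens detected in the image."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)
        # p_gen decides, per decoding step, whether to generate a word from
        # the vocabulary or copy one of the detected text tokens.
        self.p_gen_proj = nn.Linear(2 * hidden_dim, 1)

    def forward(self, decoder_state, ocr_context, ocr_attention, ocr_token_ids, vocab_size_ext):
        # decoder_state: (batch, hidden)   current decoder hidden state
        # ocr_context:   (batch, hidden)   attention-weighted sum of OCR token features
        # ocr_attention: (batch, num_ocr)  attention weights over detected tokens
        # ocr_token_ids: (batch, num_ocr)  ids of detected tokens in an extended vocabulary
        # vocab_size_ext: size of the vocabulary extended with the OCR tokens
        p_gen = torch.sigmoid(
            self.p_gen_proj(torch.cat([decoder_state, ocr_context], dim=-1))
        )

        vocab_dist = torch.softmax(self.vocab_proj(decoder_state), dim=-1)
        # Pad the vocabulary distribution so out-of-vocabulary OCR tokens fit.
        batch = decoder_state.size(0)
        extra = vocab_size_ext - vocab_dist.size(-1)
        vocab_dist = torch.cat([vocab_dist, vocab_dist.new_zeros(batch, extra)], dim=-1)

        # Scatter-add the copy probabilities onto the extended vocabulary.
        copy_dist = vocab_dist.new_zeros(batch, vocab_size_ext)
        copy_dist.scatter_add_(1, ocr_token_ids, ocr_attention)

        # Final per-token distribution: generate with prob p_gen, copy otherwise.
        return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
```

In this sketch, a detected word such as a brand name or an expiry date receives probability mass directly from the attention weights over the OCR tokens, so it can appear verbatim in the caption even if it is absent from the training vocabulary.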
Citation
Ahsan, H., Bhalla, N., Bhatt, D., & Shah, K. (2021). Multi-Modal Image Captioning for the Visually Impaired. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Student Research Workshop (pp. 53–60). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-srw.8