Evaluating Pragmatic Abilities of Image Captioners on A3DS


Abstract

Evaluating the performance of grounded neural language models with respect to pragmatic qualities, such as the trade-off between truthfulness, contrastivity, and over-informativity of generated utterances, remains a challenge in the absence of data collected from humans. To enable such evaluation, we present a novel open-source image-text dataset, “Annotated 3D Shapes” (A3DS), comprising over nine million exhaustive natural language annotations and over 12 million variable-granularity captions for the 480,000 images provided by Burgess and Kim (2018). We showcase the evaluation of pragmatic abilities developed by a task-neutral image captioner fine-tuned in a multi-agent communication setting to produce contrastive captions. The dataset enables this evaluation because its exhaustive annotations make it possible to quantify the presence of contrastive features in the model’s generations. We show that the model develops human-like patterns: informativity, brevity, and over-informativity for specific features (e.g., shape and color biases).

Citation (APA)

Tsvilodub, P., & Franke, M. (2023). Evaluating Pragmatic Abilities of Image Captioners on A3DS. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 1277–1285). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.110
