Generating Quantified Referring Expressions through Attention-Driven Incremental Perception

Abstract

We model the production of quantified referring expressions (QREs) that identify collections of visual items. A previous approach, called Perceptual Cost Pruning, modeled human QRE production using a preference-based referring expression generation algorithm, first removing facts from the input knowledge base according to a model of perceptual cost. In this paper, we present an alternative model that incrementally constructs a symbolic knowledge base by simulating human visual attention/perception from raw images. We demonstrate that this model produces the same output as Perceptual Cost Pruning. We argue that this is a more extensible approach and a step toward developing a wider range of process-level models of human visual description.
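At the process level, the abstract describes a control loop: attend to visual items one at a time, add the perceived facts to a growing symbolic knowledge base, and stop as soon as a downstream preference-based generator can produce a distinguishing expression. The Python sketch below illustrates that loop only; the `Item` schema, the `salience` field, and the `try_generate` hook are illustrative assumptions, not the paper's actual interfaces, and the paper simulates attention and perception from raw images rather than from pre-extracted properties as done here.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Set, Tuple

Fact = Tuple[int, str, str]  # (item index, attribute, value)

@dataclass
class Item:
    props: Dict[str, str]  # e.g. {"color": "red", "shape": "cube"}
    salience: float        # attention priority; assumed given here

def perceive_incrementally(
    items: List[Item],
    try_generate: Callable[[Set[Fact]], Optional[str]],
) -> Tuple[Optional[str], Set[Fact]]:
    """Simulate attention-driven incremental perception: attend to items
    one at a time (most salient first), add the perceived facts to a
    symbolic knowledge base, and stop as soon as the generator can
    produce a distinguishing expression from what has been seen."""
    kb: Set[Fact] = set()
    order = sorted(range(len(items)), key=lambda i: -items[i].salience)
    for i in order:
        for attr, value in items[i].props.items():
            kb.add((i, attr, value))
        expr = try_generate(kb)  # stand-in for a preference-based REG call
        if expr is not None:
            return expr, kb      # the KB is built only as far as needed
    return None, kb
```

One appeal of structuring the model this way, which the abstract's extensibility claim suggests, is that the perception loop is decoupled from the generator: any referring expression generation algorithm can be plugged in as `try_generate` without changing how the knowledge base is accumulated.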

Citation (APA)

Briggs, G. (2020). Generating Quantified Referring Expressions through Attention-Driven Incremental Perception. In Proceedings of the 13th International Conference on Natural Language Generation (INLG 2020) (pp. 107–112). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.inlg-1.16
