Visually Grounded Concept Composition

Abstract

We investigate ways to compose complex concepts in texts from primitive ones while grounding them in images. We propose the Concept and Relation Graph (CRG), which builds on top of constituency analysis and consists of recursively combined concepts with predicate functions. We further propose a concept composition neural network, Composer, that leverages the CRG for visually grounded concept learning. Specifically, we learn the grounding of both primitive and composed concepts by aligning them to images, and show that learning to compose leads to more robust grounding results, measured in text-to-image matching accuracy. Notably, our model produces grounded concepts at both the coarser-grained sentence level and the finer-grained intermediate (or word) level. Composer yields pronounced improvements in matching accuracy when the evaluation data exhibits significant compound divergence from the training data.
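
To make the idea of recursive, visually grounded composition concrete, here is a minimal, illustrative sketch in PyTorch. It is not the authors' released code: the `ConceptNode` and `Composer` names, the predicate MLP, the embedding dimension, and the cosine-similarity matching score are all assumptions chosen for illustration.

```python
# Illustrative sketch only (assumed names and architecture, not the paper's code):
# a toy Composer that builds a concept embedding by recursively combining a node's
# word embedding with its children's embeddings through a learned predicate function,
# then scores the result against an image embedding for text-to-image matching.
from dataclasses import dataclass, field
from typing import List
import torch
import torch.nn as nn
import torch.nn.functional as F


@dataclass
class ConceptNode:
    """A node in a toy concept graph: a primitive word concept (no children)
    or a concept composed from this word and its child concepts."""
    text: str
    children: List["ConceptNode"] = field(default_factory=list)


class Composer(nn.Module):
    def __init__(self, vocab: List[str], dim: int = 64):
        super().__init__()
        self.word_ids = {w: i for i, w in enumerate(vocab)}
        self.word_emb = nn.Embedding(len(vocab), dim)      # primitive (word-level) concepts
        self.predicate = nn.Sequential(                     # combines a concept with one child
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def embed(self, node: ConceptNode) -> torch.Tensor:
        # Start from the node's own word embedding, then fold in each child recursively.
        vec = self.word_emb(torch.tensor(self.word_ids[node.text]))
        for child in node.children:
            vec = self.predicate(torch.cat([vec, self.embed(child)]))
        return vec

    def match(self, node: ConceptNode, image_vec: torch.Tensor) -> torch.Tensor:
        """Text-to-image matching score: cosine similarity between the composed
        concept embedding and an (assumed precomputed) image embedding."""
        return F.cosine_similarity(self.embed(node), image_vec, dim=0)


# Example: "on(dog, grass)" composed from primitive concepts, scored against a random image vector.
graph = ConceptNode("on", [ConceptNode("dog"), ConceptNode("grass")])
model = Composer(vocab=["dog", "grass", "on"])
score = model.match(graph, torch.randn(64))
print(float(score))
```

In this sketch, every intermediate composition yields its own embedding, so both word-level and sentence-level concepts can be aligned to the image, mirroring (at a toy scale) the multi-level grounding described in the abstract.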

Citation (APA)

Zhang, B., Hu, H., Qiu, L., Shaw, P., & Sha, F. (2021). Visually Grounded Concept Composition. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 201–215). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.20
