VisualSem: A High-quality Knowledge Graph for Vision & Language


Abstract

An exciting frontier in natural language understanding (NLU) and generation (NLG) calls for (vision-and-) language models that can efficiently access external structured knowledge repositories. However, many existing knowledge bases only cover limited domains or suffer from noisy data, and, above all, they are typically hard to integrate into neural language pipelines. To fill this gap, we release VisualSem: a high-quality knowledge graph (KG) whose nodes include multilingual glosses, multiple illustrative images, and visually relevant relations. We also release a neural multi-modal retrieval model that takes images or sentences as input and retrieves entities in the KG. This multi-modal retrieval model can be integrated into any (neural network) model pipeline. We encourage the research community to use VisualSem for data augmentation and/or as a source of grounding, among other possible uses. VisualSem and the multi-modal retrieval models are publicly available and can be downloaded at this URL: https://github.com/iacercalixto/visualsem.
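To illustrate the kind of retrieval the abstract describes, here is a toy sketch of entity retrieval as nearest-neighbor search over node embeddings. This is not the paper's actual model: the node names, embedding vectors, and `retrieve` interface are all invented for the example; the real system encodes glosses and images with neural encoders rather than hand-written vectors.

```python
import math

# Hypothetical KG node embeddings (invented for illustration only).
# In VisualSem these would come from a neural encoder over each
# node's glosses and/or images.
NODE_EMBEDDINGS = {
    "dog":    [0.9, 0.1, 0.0],
    "cat":    [0.8, 0.2, 0.1],
    "guitar": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, k=1):
    """Return the k KG nodes whose embeddings are most similar
    to the query embedding (an encoded sentence or image)."""
    ranked = sorted(
        NODE_EMBEDDINGS,
        key=lambda n: cosine(query_vec, NODE_EMBEDDINGS[n]),
        reverse=True,
    )
    return ranked[:k]

# A query embedding close to "dog" retrieves dog-like nodes first.
print(retrieve([0.85, 0.15, 0.05], k=2))  # → ['dog', 'cat']
```

Because the retrieval step is just a ranking over the query's embedding space, the same function works whether the query vector came from a sentence encoder or an image encoder, which is what makes the model multi-modal.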

Citation (APA)

Alberts, H., Huang, N., Deshpande, Y. R., Liu, Y., Cho, K., Vania, C., & Calixto, I. (2021). VisualSem: A High-quality Knowledge Graph for Vision & Language. In MRL 2021 - 1st Workshop on Multilingual Representation Learning, Proceedings of the Conference (pp. 138–152). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.mrl-1.13
