Improving Scene Graph Classification by Exploiting Knowledge from Texts


Abstract

Training scene graph classification models requires a large amount of annotated image data. Meanwhile, scene graphs represent relational knowledge that can be modeled with symbolic data from texts or knowledge graphs. While image annotation demands extensive labor, collecting textual descriptions of natural scenes requires less effort. In this work, we investigate whether textual scene descriptions can substitute for annotated image data. To this end, we employ a scene graph classification framework that is trained not only on annotated images but also on symbolic data. In our architecture, the symbolic entities are first mapped to their corresponding image-grounded representations and then fed into the relational reasoning pipeline. Even though a structured form of knowledge, such as that found in knowledge graphs, is not always available, we can generate it from unstructured texts using a transformer-based language model. We show that by fine-tuning the classification pipeline with the knowledge extracted from texts, we achieve ∼8x more accurate results in scene graph classification, ∼3x in object classification, and ∼1.5x in predicate classification, compared to supervised baselines trained on only 1% of the annotated images.
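The pipeline the abstract describes (symbolic entities mapped to image-grounded embeddings, then relational reasoning over entity pairs) can be sketched in a toy form. Everything below is illustrative: the embedding table, the predicate prototypes, and the pairwise scoring are hypothetical stand-ins, not the authors' actual model.

```python
# Hypothetical sketch: symbolic entities from text-extracted triples are
# mapped to (assumed) image-grounded embeddings, then a toy relational
# classifier scores predicate candidates for a subject-object pair.
# All values and function names are illustrative, not the paper's code.

# Toy "image-grounded" embedding table for symbolic entity labels.
ENTITY_EMBEDDINGS = {
    "person": [0.9, 0.1],
    "horse":  [0.2, 0.8],
}

# Toy predicate prototypes used by the relational reasoning step.
PREDICATE_PROTOTYPES = {
    "riding":      [0.55, 0.45],
    "standing_on": [0.10, 0.90],
}

def embed(entity):
    """Map a symbolic entity label to its image-grounded embedding."""
    return ENTITY_EMBEDDINGS[entity]

def pair_feature(subj_vec, obj_vec):
    """Combine subject and object embeddings (here: element-wise mean)."""
    return [(s + o) / 2 for s, o in zip(subj_vec, obj_vec)]

def classify_predicate(subj, obj):
    """Score each predicate prototype against the pair feature (dot product)."""
    feat = pair_feature(embed(subj), embed(obj))
    scores = {
        pred: sum(f * w for f, w in zip(feat, proto))
        for pred, proto in PREDICATE_PROTOTYPES.items()
    }
    return max(scores, key=scores.get)

print(classify_predicate("person", "horse"))  # → riding
```

In the actual framework, the embedding lookup and the relational scorer are learned networks trained jointly on annotated images and on triples extracted from text by a language model; the sketch only shows the data flow.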

Citation (APA)

Sharifzadeh, S., Baharlou, S. M., Schmitt, M., Schütze, H., & Tresp, V. (2022). Improving Scene Graph Classification by Exploiting Knowledge from Texts. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 2189–2197). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i2.20116
