Representation learning for scene graph completion via jointly structural and visual embedding

Abstract

This paper focuses on scene graph completion, which aims to predict new relations between two entities using existing scene graphs and their images. Comparing scene graphs with well-known knowledge graphs, we first observe that each scene graph is associated with an image, and that each entity of a visual triple in a scene graph is characterized by its entity type and attributes and grounded by a bounding box in the corresponding image. We then propose an end-to-end model named Representation Learning via Jointly Structural and Visual Embedding (RLSV) to take advantage of both the structural and the visual information in scene graphs. In the RLSV model, a fully convolutional module extracts the visual embeddings of a visual triple, and a hierarchical projection combines the structural and visual embeddings of that triple. In experiments, we evaluate our model on two scene graph completion tasks, link prediction and visual triple classification, and further analyze the results through case studies. Experimental results show that our model outperforms all baselines on both tasks, which confirms the value of combining structural and visual information for scene graph completion.
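The paper does not include reference code here, but the abstract's core idea can be sketched. Below is a minimal, hypothetical PyTorch illustration: a small fully convolutional encoder maps each entity's grounded image region to a visual embedding, a learned projection fuses it with a structural (lookup-table) embedding, and the resulting triple is scored translationally in TransE style. All class, module, and parameter names are illustrative assumptions, and the concatenation-plus-linear fusion is a simple stand-in for the paper's hierarchical projection, not the authors' exact formulation.

```python
# Hypothetical sketch of joint structural + visual triple embedding.
# Not the RLSV reference implementation; names and fusion are assumptions.
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Fully convolutional encoder: maps a cropped RGB region to a vector."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, dim, 1, 1)
        )

    def forward(self, region: torch.Tensor) -> torch.Tensor:
        # region: (B, 3, H, W) crop of the entity's bounding box
        return self.conv(region).flatten(1)   # (B, dim)

class JointTripleScorer(nn.Module):
    """Fuses structural and visual embeddings, scores a visual triple (h, r, t)."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # structural entity embeddings
        self.rel = nn.Embedding(n_relations, dim)   # relation embeddings
        self.visual = VisualEncoder(dim)
        self.proj = nn.Linear(2 * dim, dim)         # fuse structural + visual

    def embed_entity(self, idx: torch.Tensor, region: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.ent(idx), self.visual(region)], dim=-1)
        return self.proj(joint)

    def forward(self, h_idx, r_idx, t_idx, h_region, t_region):
        h = self.embed_entity(h_idx, h_region)
        t = self.embed_entity(t_idx, t_region)
        r = self.rel(r_idx)
        # TransE-style score: smaller distance = more plausible triple
        return torch.norm(h + r - t, p=2, dim=-1)

# Toy usage with random regions (batch of one triple):
model = JointTripleScorer(n_entities=150, n_relations=50)
score = model(torch.tensor([0]), torch.tensor([3]), torch.tensor([7]),
              torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```

Under this framing, a lower score marks a more plausible visual triple; training would typically minimize a margin-based ranking loss between observed and corrupted triples, as is standard for translation-based embedding models.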

Citation (APA)

Wan, H., Luo, Y., Peng, B., & Zheng, W. S. (2018). Representation learning for scene graph completion via jointly structural and visual embedding. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 949–956). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/132
