Visual spatial description (VSD) aims to generate text that describes the spatial relations of given objects within images. Existing VSD work models only 2D geometric vision features and thus inevitably suffers from a skewed spatial understanding of the target objects. In this work, we investigate incorporating 3D scene features into VSD. With an external 3D scene extractor, we obtain 3D objects and scene features for input images, from which we construct a target object-centered 3D spatial scene graph (Go3D-S2G) that models the spatial semantics of the target objects within the holistic 3D scene. In addition, we propose a scene subgraph selection mechanism that samples topologically diverse subgraphs from Go3D-S2G, where the diverse local structural features are exploited to yield spatially diversified text generation. Experimental results on two VSD datasets demonstrate that our framework significantly outperforms the baselines, especially on cases with complex visual spatial relations, while also producing more spatially diversified descriptions. Code is available at https://github.com/zhaoyucs/VSD.
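
To make the pipeline sketched in the abstract concrete, the following Python snippet is a minimal, self-contained illustration of the two ideas named there: building a target object-centered 3D spatial scene graph from externally extracted 3D object positions, and sampling topologically diverse subgraphs around the targets. It is not the authors' implementation (their code is at the repository linked above), and all names used here (Object3D, spatial_relation, build_go3d_s2g, sample_diverse_subgraphs) are hypothetical.

```python
# Illustrative sketch only -- NOT the authors' implementation (see the linked
# repository for the real code). All names below are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class Object3D:
    name: str
    center: tuple  # (x, y, z) center from an external 3D scene extractor

def spatial_relation(a: Object3D, b: Object3D) -> str:
    """Coarse 3D relation label of b with respect to a, based on the dominant axis."""
    dx, dy, dz = (b.center[i] - a.center[i] for i in range(3))
    axis = max(range(3), key=lambda i: abs((dx, dy, dz)[i]))
    labels = [("left of", "right of"), ("below", "above"), ("behind", "in front of")]
    return labels[axis][(dx, dy, dz)[axis] > 0]

def build_go3d_s2g(objects, targets):
    """Target object-centered graph: edges connect each target to every other object."""
    edges = []
    for t in targets:
        for o in objects:
            if o is not t:
                edges.append((t.name, spatial_relation(t, o), o.name))
    return edges

def sample_diverse_subgraphs(edges, k=3, size=2, seed=0):
    """Greedily keep random edge subsets that share at most one node with any
    previously kept subset -- a simple stand-in for topologically diverse sampling."""
    rng = random.Random(seed)
    candidates = [rng.sample(edges, min(size, len(edges))) for _ in range(20)]
    def nodes(sub):
        return {n for (h, _, t) in sub for n in (h, t)}
    chosen = []
    for sub in candidates:
        if all(len(nodes(sub) & nodes(c)) <= 1 for c in chosen):
            chosen.append(sub)
        if len(chosen) == k:
            break
    return chosen

if __name__ == "__main__":
    objs = [Object3D("person", (0, 0, 0)), Object3D("bike", (1.2, 0, 0.3)),
            Object3D("tree", (-0.5, 0, 2.0)), Object3D("bench", (0.2, -0.4, 1.0))]
    graph = build_go3d_s2g(objs, targets=objs[:2])
    for sub in sample_diverse_subgraphs(graph):
        print(sub)  # each subgraph could condition one spatially distinct description
```

In this sketch, each sampled subgraph exposes a different local neighborhood of the target objects, which is the intuition behind letting diverse local structures drive spatially diversified text generation.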
Citation:
Zhao, Y., Fei, H., Ji, W., Wei, J., Zhang, M., Zhang, M., & Chua, T. S. (2023). Generating Visual Spatial Description via Holistic 3D Scene Understanding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 7960–7977). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.442