Learning spatial knowledge for text to 3D scene generation

Abstract

We address the grounding of natural language to concrete spatial constraints, and inference of implicit pragmatics in 3D environments. We apply our approach to the task of text-to-3D scene generation. We present a representation for common sense spatial knowledge and an approach to extract it from 3D scene data. In text-to-3D scene generation, a user provides as input natural language text from which we extract explicit constraints on the objects that should appear in the scene. The main innovation of this work is to show how to augment these explicit constraints with learned spatial knowledge to infer missing objects and likely layouts for the objects in the scene. We demonstrate that spatial knowledge is useful for interpreting natural language and show examples of learned knowledge and generated 3D scenes.
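The pipeline described in the abstract (extract explicit object constraints from text, then fall back on learned priors to infer unmentioned objects and plausible support relations) can be sketched roughly as below. This is an illustrative toy, not the authors' implementation: the prior tables, the threshold, the function names (infer_missing_objects, choose_support_parent), and all probability values are invented for this example; in the paper such priors are learned from 3D scene data.

```python
# Toy sketch of constraint augmentation with learned spatial priors.
# All numbers and names are hypothetical, chosen only to illustrate the idea.

SUPPORT_PRIOR = {
    # P(child object is supported by parent surface) -- made-up values
    ("plate", "table"): 0.9,
    ("plate", "desk"): 0.4,
    ("lamp", "desk"): 0.7,
}

COOCCURRENCE_PRIOR = {
    # P(object appears in scene | scene contains anchor object) -- made-up values
    "table": {"chair": 0.8, "plate": 0.5},
    "desk": {"chair": 0.9, "lamp": 0.6},
}

def infer_missing_objects(explicit_objects, threshold=0.5):
    """Add likely-but-unmentioned objects using co-occurrence priors."""
    inferred = set(explicit_objects)
    for anchor in explicit_objects:
        for obj, p in COOCCURRENCE_PRIOR.get(anchor, {}).items():
            if p >= threshold:
                inferred.add(obj)
    return inferred

def choose_support_parent(obj, candidates):
    """Pick the most probable supporting surface for an object, if any."""
    scored = [(SUPPORT_PRIOR.get((obj, c), 0.0), c) for c in candidates]
    p, parent = max(scored)
    return parent if p > 0 else None

# Usage: "There is a plate on a table." mentions only two objects explicitly.
explicit = {"plate", "table"}
scene_objects = infer_missing_objects(explicit)          # also infers "chair"
print(scene_objects)
print(choose_support_parent("plate", scene_objects - {"plate"}))  # -> "table"
```

The key point the sketch illustrates is the division of labor: explicit constraints come from parsing the input text, while the priors supply the implicit pragmatics, i.e., objects and spatial arrangements a human would assume but never state.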

Citation (APA)

Chang, A. X., Savva, M., & Manning, C. D. (2014). Learning spatial knowledge for text to 3D scene generation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014) (pp. 2028–2038). Association for Computational Linguistics. https://doi.org/10.3115/v1/d14-1217
