GeoSynth: A Photorealistic Synthetic Indoor Dataset for Scene Understanding

Abstract

Deep learning has revolutionized many scene perception tasks over the past decade. Some of these improvements can be attributed to the development of large labeled datasets. The creation of such datasets can be an expensive, time-consuming, and imperfect process. To address these issues, we introduce GeoSynth, a diverse photorealistic synthetic dataset for indoor scene understanding tasks. Each GeoSynth exemplar contains rich labels including segmentation, geometry, camera parameters, surface material, lighting, and more. We demonstrate that supplementing real training data with GeoSynth can significantly improve network performance on perception tasks such as semantic segmentation. A subset of our dataset will be made publicly available at https://github.com/geomagical/GeoSynth.
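
As a loose illustration of the supplementation strategy the abstract describes, the PyTorch sketch below mixes synthetic and real exemplars into one training stream for semantic segmentation. The directory layout, file names, and GeoSynthDataset class are hypothetical stand-ins, not the paper's actual release format or API.

```python
# Minimal sketch, assuming a hypothetical per-scene file layout; the
# released GeoSynth format may differ.
from pathlib import Path

import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset


class GeoSynthDataset(Dataset):
    """Loads (image, segmentation) pairs from an assumed layout:

        root/<scene_id>/rgb.pt       # HxWx3 photorealistic render
        root/<scene_id>/semantic.pt  # HxW integer label map

    Each exemplar also ships geometry, camera, material, and lighting
    labels, which a fuller loader could expose the same way.
    """

    def __init__(self, root: str):
        self.scenes = sorted(Path(root).iterdir())

    def __len__(self) -> int:
        return len(self.scenes)

    def __getitem__(self, idx: int):
        scene = self.scenes[idx]
        image = torch.load(scene / "rgb.pt")        # input image tensor
        labels = torch.load(scene / "semantic.pt")  # per-pixel class ids
        return image, labels


# Supplement a real dataset with synthetic exemplars by concatenating
# the two and shuffling, so each batch can draw from both sources.
real = GeoSynthDataset("data/real")  # stand-in for a real-image dataset
synthetic = GeoSynthDataset("data/geosynth")
loader = DataLoader(ConcatDataset([real, synthetic]),
                    batch_size=8, shuffle=True)
```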

Cite (APA)

Pugh, B., Chernak, D., & Jiddi, S. (2023). GeoSynth: A Photorealistic Synthetic Indoor Dataset for Scene Understanding. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2586–2595. https://doi.org/10.1109/TVCG.2023.3247087
