Improved Sensor Model for Realistic Synthetic Data Generation


Abstract

Synthetic, i.e., computer-generated imagery (CGI), data is a key component for training and validating deep-learning-based perceptive functions due to its ability to simulate rare cases, its avoidance of privacy issues, and the easy generation of huge datasets with pixel-accurate ground-truth data. Recent simulation and rendering engines already simulate a wealth of realistic optical effects, but they are mainly focused on the human perception system. Perceptive functions, however, require realistic images modeled with sensor artifacts as close as possible to those of the sensor the training data was recorded with. In this paper we propose a method to improve data synthesis by introducing a more realistic sensor model that implements a number of sensor and lens artifacts. We further propose a Wasserstein distance (earth mover's distance, EMD) based domain divergence measure and use it as a minimization criterion to adapt the parameters of our sensor artifact simulation from synthetic to real images. With the optimized sensor parameters applied to the synthetic training images, the mIoU of a semantic segmentation network (DeepLabV3+) trained solely on synthetic images increases from 40.36% to 47.63%.
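To illustrate the idea of an EMD-based domain divergence, the following is a minimal sketch (not the paper's actual implementation) that compares pixel-intensity histograms of a synthetic and a real image set via the 1-D earth mover's distance; the histogram features, bin count, and intensity range are assumptions for illustration only:

```python
import numpy as np

def emd_1d(p, q):
    """1-D earth mover's distance between two histograms over identical,
    unit-spaced bins: the L1 distance between their CDFs."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sum(np.abs(np.cumsum(p - q))))

def domain_divergence(synth_images, real_images, bins=64):
    """EMD between pooled pixel-intensity histograms of two image sets,
    usable as a scalar objective when tuning sensor-model parameters."""
    h_s, _ = np.histogram(np.concatenate([im.ravel() for im in synth_images]),
                          bins=bins, range=(0, 255))
    h_r, _ = np.histogram(np.concatenate([im.ravel() for im in real_images]),
                          bins=bins, range=(0, 255))
    return emd_1d(h_s, h_r)
```

In this sketch, the sensor-artifact parameters would be adjusted (e.g., by grid search or gradient-free optimization) to minimize `domain_divergence` between the rendered and the recorded images; the paper's measure may operate on richer features than raw intensity histograms.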

Citation (APA)

Hagn, K., & Grau, O. (2021). Improved Sensor Model for Realistic Synthetic Data Generation. In Proceedings - CSCS 2021: ACM Computer Science in Cars Symposium. Association for Computing Machinery, Inc. https://doi.org/10.1145/3488904.3493383
