Adversarial Dataset Augmentation Using Reinforcement Learning and 3D Modeling

Abstract

An extensive and diverse dataset is a crucial requirement for the successful training of a deep neural network. Compared to on-site data collection, 3D modeling makes it possible to generate large datasets faster and at lower cost. Still, the diversity and perceptual realism of synthetic images depend largely on a 3D artist's experience. Moreover, hard sample mining with 3D modeling poses an open question: which synthetic images are challenging for an object detection model? We present an Adversarial 3D modeling framework that trains an object detection model against a reinforcement learning-based adversarial controller. The controller alters the 3D simulator parameters to generate complex synthetic images, aiming to minimize the score of the object detection model during training. We hypothesize that this objective maximizes the score of the detection model during inference on real-world data. We evaluate our approach by training a YOLOv3 object detection model with our adversarial framework. A comparison with a similar model trained on random synthetic and real images shows that our framework achieves better performance than training on random real or synthetic data.
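The abstract describes the adversarial loop only at a high level, so the sketch below is an illustrative guess rather than the authors' implementation. It assumes a continuous Gaussian policy over simulator parameters updated with REINFORCE (PyTorch), with render_scene() standing in for the 3D simulator and DetectorStub standing in for YOLOv3; all names, losses, and hyperparameters are invented for illustration. The one point taken directly from the abstract is the reward structure: the controller is rewarded when the detector scores poorly on the image rendered from its parameters.

# Hypothetical sketch of the adversarial augmentation loop: an RL controller
# proposes 3D-simulator parameters, the rendered image is used to train the
# detector, and the controller is rewarded for images the detector handles
# poorly. All names and values below are placeholder assumptions.
import torch
import torch.nn as nn
from torch.distributions import Normal

N_PARAMS = 8  # e.g. lighting, camera pose, texture indices (assumed count)


class Controller(nn.Module):
    """Gaussian policy over continuous simulator parameters (assumed design)."""
    def __init__(self, n_params: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_params))
        self.log_std = nn.Parameter(torch.zeros(n_params))

    def sample(self):
        dist = Normal(self.mu, self.log_std.exp())
        params = dist.sample()
        return params, dist.log_prob(params).sum()


def render_scene(params: torch.Tensor) -> torch.Tensor:
    """Placeholder for the 3D simulator; returns a dummy 416x416 RGB image."""
    return torch.rand(3, 416, 416)


class DetectorStub(nn.Module):
    """Stand-in for YOLOv3 that outputs a scalar detection score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 4, 3), nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten(), nn.Linear(4, 1), nn.Sigmoid())

    def score(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image.unsqueeze(0)).squeeze()


controller, detector = Controller(N_PARAMS), DetectorStub()
ctrl_opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
det_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

for step in range(100):
    params, log_prob = controller.sample()
    image = render_scene(params)

    # Train the detector on the adversarial image. In the paper's setting this
    # would be the full YOLOv3 detection loss against the simulator's
    # ground-truth boxes; a scalar surrogate keeps the sketch self-contained.
    det_loss = 1.0 - detector.score(image)
    det_opt.zero_grad()
    det_loss.backward()
    det_opt.step()

    # REINFORCE update: the controller's reward is the negative detection
    # score, so it learns to propose scenes the detector finds difficult.
    with torch.no_grad():
        reward = -detector.score(image)
    ctrl_loss = -log_prob * reward
    ctrl_opt.zero_grad()
    ctrl_loss.backward()
    ctrl_opt.step()

The design choice worth noting is the zero-sum coupling: the same score drives gradient descent for the detector and serves, negated, as the reward for the controller, which is what makes the generated samples progressively harder.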

Citation (APA)

Kniaz, V. V., Knyaz, V. A., Mizginov, V., Papazyan, A., Fomin, N., & Grodzitsky, L. (2021). Adversarial Dataset Augmentation Using Reinforcement Learning and 3D Modeling. In Studies in Computational Intelligence (Vol. 925, pp. 316–329). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60577-3_38
