Generating training images using a 3D city model for road sign detection

Abstract

To prevent traffic accidents caused by drivers missing or misreading road signs, methods for detecting road signs in images captured by an in-vehicle camera have been developed. However, deep learning, which is now widely used for such detection, requires a large amount of training data, and it is difficult to photograph road signs from many directions in many locations. In this research, we propose a method for generating training images for a deep-learning road sign detector by simulation with a 3D urban model. The appearance of a road sign rendered in the simulation varies with its distance and direction from the camera and with the brightness of the scene. Applying these variations to Japanese road signs, 303,750 sign image variants and their mask areas were automatically generated and used for training. Training YOLO detectors on these images yielded F-measures of 66.7% to 88.9% for some road sign class groups.
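The variation scheme described above (apparent size from camera distance, plus scene brightness) can be sketched as a simple image-generation loop. This is a hypothetical illustration, not the authors' pipeline: a nearest-neighbour resize stands in for rendering the sign at different distances in the 3D city model, and a multiplicative factor stands in for scene brightness; the function name and parameters are assumptions.

```python
import numpy as np

def render_sign_variants(sign, scales=(0.5, 1.0), brightness=(0.6, 1.0, 1.4)):
    """Generate (image, mask) training pairs from one sign template.

    Sketch of simulation-style data generation: each scale mimics a
    camera-to-sign distance, each brightness factor mimics scene lighting.
    `sign` is an (H, W, 3) uint8 array; the mask marks sign pixels.
    """
    variants = []
    h, w = sign.shape[:2]
    for s in scales:
        nh, nw = max(1, int(h * s)), max(1, int(w * s))
        # nearest-neighbour resample: farther signs occupy fewer pixels
        ys = np.arange(nh) * h // nh
        xs = np.arange(nw) * w // nw
        resized = sign[ys][:, xs]
        # mask area for the resized sign (all pixels belong to the sign here)
        mask = np.ones((nh, nw), dtype=bool)
        for b in brightness:
            # scale intensity to mimic scene brightness, clip to valid range
            img = np.clip(resized.astype(float) * b, 0, 255).astype(np.uint8)
            variants.append((img, mask))
    return variants
```

In the paper's setting the combinations of distance, direction, and brightness multiply out to the 303,750 generated variants; here, 2 scales × 3 brightness levels give 6 pairs per template.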

Citation (APA)

Kato, R., Nishiguchi, S., Hashimoto, W., & Mizutani, Y. (2018). Generating training images using a 3D city model for road sign detection. In Communications in Computer and Information Science (Vol. 852, pp. 381–386). Springer Verlag. https://doi.org/10.1007/978-3-319-92285-0_51
