To prevent traffic accidents caused by overlooking or misreading road signs, methods for detecting road signs in images captured by an in-vehicle camera have been developed. However, the deep learning approaches widely used in recent years require large amounts of training data, and it is difficult to photograph road signs from many directions in many locations. In this research, we propose a method for generating training images for a deep learning road sign detector using a 3D urban model simulation. The appearance of a road sign rendered in the simulation depends on its distance and direction from the camera and on the brightness of the scene. Applying these variations to Japanese road signs, 303,750 sign images and their mask regions were automatically generated and used for training. Training YOLO detectors on these images yielded F-measures of 66.7% to 88.9% for several road sign class groups.
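The variation pipeline the abstract describes (distance, viewing direction, scene brightness, plus an automatic mask) can be illustrated with a minimal pure-NumPy sketch. This is not the authors' implementation; the `render_sign` function, the nearest-neighbour resize standing in for the 3D renderer's projection, and the sample parameter grid are all assumptions for illustration.

```python
import numpy as np

def render_sign(sign_rgba, scale, brightness):
    """Simulate one training sample: shrink a sign by a distance-dependent
    scale, adjust scene brightness, and return the image plus its mask.
    (Hypothetical stand-in for rendering from a 3D urban model.)"""
    h, w = sign_rgba.shape[:2]
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour resize via index arrays (proxy for camera distance).
    ys = np.arange(new_h) * h // new_h
    xs = np.arange(new_w) * w // new_w
    resized = sign_rgba[ys][:, xs]
    # Scale RGB channels to mimic scene brightness changes.
    rgb = np.clip(resized[..., :3].astype(np.float32) * brightness,
                  0, 255).astype(np.uint8)
    # The alpha channel directly gives the mask region for training.
    mask = (resized[..., 3] > 0).astype(np.uint8)
    return rgb, mask

# A stand-in "sign": an opaque red square on a transparent 64x64 canvas.
sign = np.zeros((64, 64, 4), dtype=np.uint8)
sign[8:56, 8:56] = [255, 0, 0, 255]

# Enumerate a small grid of distances (scales) and brightness levels,
# analogous to how the paper's simulation sweeps camera parameters.
samples = [render_sign(sign, s, b)
           for s in (1.0, 0.5, 0.25)
           for b in (0.6, 1.0, 1.4)]
```

In the actual method each rendered image would come from the 3D city model rather than a flat resize, but the output format, an image paired with its mask region per parameter combination, is the same idea.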
Kato, R., Nishiguchi, S., Hashimoto, W., & Mizutani, Y. (2018). Generating training images using a 3D city model for road sign detection. In Communications in Computer and Information Science (Vol. 852, pp. 381–386). Springer Verlag. https://doi.org/10.1007/978-3-319-92285-0_51