A dataset of aerial urban traffic images and their semantic segmentations is presented for training computer vision algorithms, notably those based on convolutional neural networks. This article describes the process of creating the complete dataset: the acquisition of the images; the labeling of vehicles, pedestrians, and pedestrian crossings; and the structure and content of the dataset, which comprises 8694 images in total (visible images and their corresponding semantic segmentations). The images were generated with the CARLA simulator, but they resemble those that could be obtained from fixed aerial cameras or multi-copter drones in the field of intelligent transportation management. The dataset is openly accessible and is intended to improve the performance of vision-based road traffic management systems, especially for the detection of incorrect or dangerous maneuvers. Dataset: The data presented in this study are openly available at https://zenodo.org/doi/10.5281/zenodo.10058944 (accessed on 17 December 2023) with DOI: 10.5281/zenodo.10058944. Dataset License: CC-BY-4.0.
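Since each visible image is paired with a color-coded semantic-segmentation image, a typical first step when consuming the dataset is converting the RGB-encoded mask into integer class IDs. The sketch below shows one way to do this with NumPy; the color-to-class palette here is an assumption for illustration (CARLA's default segmentation palette follows the Cityscapes convention, e.g. vehicles as (0, 0, 142) and pedestrians as (220, 20, 60)), so verify the actual values against the dataset's documentation before use.

```python
import numpy as np

# Assumed color-to-class mapping for illustration only -- check the
# dataset's own documentation for the authoritative palette.
PALETTE = {
    (0, 0, 142): 1,    # vehicle (Cityscapes-style color)
    (220, 20, 60): 2,  # pedestrian (Cityscapes-style color)
}

def mask_to_ids(mask: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 color-coded segmentation mask to an HxW array
    of integer class IDs (0 = background / unlabeled)."""
    ids = np.zeros(mask.shape[:2], dtype=np.uint8)
    for color, class_id in PALETTE.items():
        # Mark every pixel whose RGB triple matches this class color.
        ids[np.all(mask == np.array(color, dtype=np.uint8), axis=-1)] = class_id
    return ids

# Tiny synthetic 2x2 mask: vehicle, pedestrian, background, vehicle.
demo = np.array(
    [[[0, 0, 142], [220, 20, 60]],
     [[0, 0, 0], [0, 0, 142]]],
    dtype=np.uint8,
)
print(mask_to_ids(demo).tolist())  # → [[1, 2], [0, 1]]
```

An ID map in this form can be fed directly to common segmentation losses (e.g. cross-entropy over class indices) when training a CNN on the dataset.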
CITATION STYLE
Rosende, S. B., Gavilán, D. S. J., Fernández-Andrés, J., & Sánchez-Soriano, J. (2024). An Urban Traffic Dataset Composed of Visible Images and Their Semantic Segmentation Generated by the CARLA Simulator. Data, 9(1). https://doi.org/10.3390/data9010004